Question
Asked 30th Jun, 2015

In a meta-analysis of 3 RCTs, why does the Cochrane Handbook (ref 17.7) discourage using the mean difference?

In a meta-analysis of 3 RCTs, unlucky randomisation produced imbalanced baseline (±SD) and end-of-study (±SD) PRO psychometric scores, yet the Cochrane Handbook (ref 17.7) discourages using the mean difference of change scores. The result is non-significant when the final outcome measure is used alone, but a significant benefit appears if the change in the measure is used, because of the unlucky randomisation (all three trials had a worse baseline in the treatment arm, which drags down the end score despite significant improvement). Handbook ref 16.1.3.2 advises imputing an SD for the change, but even worst-case assumptions for the correlation still only approximate the average of the baseline and end SDs.
This issue has also been sent to Gotzsche and Glasziou, as the benefit is 'obvious', but the conclusion is negative!

All Answers (3)

30th Jun, 2015
Darren C Greenwood
University of Leeds
Not sure if any of these thoughts cover what you're looking for:
There are (at least) 3 options for analysing the data: (1) just use outcomes at end of study, (2) use change from baseline, or (3) use outcomes at end, adjusting for baseline. The third option is more powerful and makes fewer assumptions.
In the context of meta-analysis, we want everything to be somehow on the same scale. So maybe you want all the estimates included to be estimates of the difference between the means, adjusting for any baseline. Both (1) and (2) approximate this in the long run, so just use whichever is closest / has the information presented.
Final thought is that if it really is "unlucky randomisation" then in the long-run, there'll be studies that are unlucky in the other direction. So maybe in the context of meta-analysis, this is just a bit of expected heterogeneity, and nothing to be scared of?
But these thoughts may not touch on what you were trying to ask. Maybe you could spell it out a bit more for me if I haven't grasped what you meant.
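A minimal simulation may make the disagreement between the three options concrete. This is only a sketch in Python/NumPy under assumed numbers (true effect, baseline imbalance, and regression slope are all invented for illustration), not an analysis of the actual trials:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # patients per arm (illustrative)
treat = np.repeat([0, 1], n)

# "Unlucky randomisation": the treatment arm starts 4 points worse (higher)
baseline = rng.normal(50, 10, 2 * n) + 4 * treat

# End score tracks baseline (slope 0.8) with a true treatment benefit of -5
end = 0.8 * baseline + rng.normal(10, 6, 2 * n) - 5 * treat

# (1) Final-score comparison: carries the baseline imbalance into the estimate
d_final = end[treat == 1].mean() - end[treat == 0].mean()

# (2) Change-from-baseline comparison: over-corrects when the slope is < 1
change = end - baseline
d_change = change[treat == 1].mean() - change[treat == 0].mean()

# (3) ANCOVA: regress end score on baseline and treatment (ordinary least squares)
X = np.column_stack([np.ones(2 * n), baseline, treat])
beta, *_ = np.linalg.lstsq(X, end, rcond=None)
d_ancova = beta[2]  # baseline-adjusted treatment effect

print(f"final-score diff : {d_final:+.2f}")
print(f"change-score diff: {d_change:+.2f}")
print(f"ANCOVA estimate  : {d_ancova:+.2f}  (true effect -5)")
```

With these assumed numbers, option (1) is dragged toward zero by the worse baseline in the treatment arm, option (2) swings past the true effect, and option (3) recovers roughly -5, which matches the pattern described in the question.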
1 Recommendation
30th Jun, 2015
Geoff Kirwood
Deakin University
Concur, Dr Greenwood. If you ruled the world then (3) ANCOVA would be mandated for all trials, and my dilemma would disappear. (1) was used in the published review, giving a non-significant result (CI -6 to 1). However, option (2) pictures a -17 to -2 result, even when imputing a worst-case SD of the difference of ~1.4 times the highly correlated baseline & end SDs.
I think my problem is with the formula in the Cochrane Handbook 16.1.3.2 being so much less conservative than the 1.4*SD [using SDdiff = sqrt(SDbase^2 + SDend^2)] where the baseline & end variances are nearly the same. Their result is an SD approaching zero - clearly nonsense.
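For reference, the Handbook 16.1.3.2 imputation is SD_change = sqrt(SD_base^2 + SD_end^2 - 2*Corr*SD_base*SD_end), and a quick Python sketch (with illustrative SDs of 10) shows both regimes being discussed: correlation 0 reproduces the ~1.4*SD worst case, while correlation near 1 drives the imputed SD toward zero:

```python
import math

def sd_change(sd_base, sd_end, corr):
    """Cochrane Handbook 16.1.3.2: imputed SD of change from baseline."""
    return math.sqrt(sd_base**2 + sd_end**2 - 2 * corr * sd_base * sd_end)

# Equal baseline and end SDs of 10 (illustrative numbers)
for r in (0.0, 0.5, 0.95):
    print(f"corr = {r:.2f} -> SD_change = {sd_change(10, 10, r):.2f}")
# corr = 0.00 -> SD_change = 14.14  (the ~1.4 * SD worst case)
# corr = 0.50 -> SD_change = 10.00  (equals the common SD)
# corr = 0.95 -> SD_change = 3.16   (shrinks toward zero as corr -> 1)
```

So the formula itself is internally consistent; the disagreement above is about whether a near-1 correlation (and hence a tiny imputed SD of change) is a plausible input for these PRO scores.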

Similar questions and discussions

How do I perform meta-analysis of single arm studies?
Question
15 answers
  • Victor Ejigah
I tried performing a meta-analysis of single-arm studies (i.e. without controls) using openMeta-analyst, which allowed me to combine the effect estimates in the form of proportions. Different organ transplants with a similar endpoint were included from various studies. I was able to:
1) obtain the overall estimate across all organs,
2) perform sub-group analyses to find the estimates for specific organs, timing of treatment (<7 days and >7 days), study design, and availability of insurance cover or not, and
3) run a meta-regression to assess the impact of covariates such as timing of treatment.
My PI wants a direct comparison of the point estimates from the subgroups already meta-analyzed. Is this good practice, and how can I do this without controls? I thought of using one organ, say heart, as the intervention and liver as the control, and then including the number of events/total number of subjects for studies that provided data. For the corresponding control or intervention arm without values (since this direct comparison was not done in the individual studies), I used zero, which the software automatically corrects with 0.5 and handles pretty well. Is this an ideal way to obtain the RR or OR across different sub-groups?
Please see below a schema of what I did:
Study   Organ 1   Organ 2
A       0/0       6/20
B       0/0       4/9
C       0/0       8/23
D       6/21      0/0
E       34/45     0/0
F       12/50     0/0
A, B and C don't have information on organ 1, while D, E and F don't have info on organ 2. I am hoping this setup can help me unravel the difference between organs for a specific outcome measured.
I would appreciate your urgent response.
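One alternative to inventing a 0/0 "control" arm, sketched here without endorsing either approach, is to pool each organ's single-arm proportions separately (fixed-effect, inverse-variance on the logit scale) and then compare the pooled subgroup estimates with a z-test, as in a standard test for subgroup differences. The function name and the fixed-effect choice are assumptions for illustration; the counts are taken from the schema above:

```python
import math

def pooled_logit(arms):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    arms: list of (events, total) pairs from single-arm studies."""
    num = den = 0.0
    for e, n in arms:
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1 / e + 1 / (n - e)      # approximate variance of the logit
        num += logit / var
        den += 1 / var
    return num / den, 1 / den          # pooled logit and its variance

organ1 = [(6, 21), (34, 45), (12, 50)]   # studies D, E, F
organ2 = [(6, 20), (4, 9), (8, 23)]      # studies A, B, C

l1, v1 = pooled_logit(organ1)
l2, v2 = pooled_logit(organ2)

# Test for subgroup difference: z-statistic on the difference of pooled logits
z = (l1 - l2) / math.sqrt(v1 + v2)
print(f"pooled proportion, organ 1: {1 / (1 + math.exp(-l1)):.3f}")
print(f"pooled proportion, organ 2: {1 / (1 + math.exp(-l2)):.3f}")
print(f"z for subgroup difference : {z:.2f}")
```

Note this compares subgroups of studies, not randomised arms, so any difference remains observational and vulnerable to confounding across studies.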
How should similar studies by the same authors be assessed in a systematic review?
Question
8 answers
  • Pasquale Balzan
Hi All,
I am currently undertaking a systematic review. Two of the studies, completed by nearly the same authors, seem to have used the same set of participants.
The differences mainly lie in the study design:
Study 1 was a double-blind, randomized, sham-controlled trial (5 days/week for 2 weeks) in twenty patients with a 3:2 ratio. At baseline, each patient underwent a clinical evaluation. Assessments were then carried out immediately after either sham or real stimulation (post-stimulation, T1), at one month (T2) and at three months' follow-up (T3).
Study 2 was a randomized, double-blind, sham-controlled, crossover trial (5 d/wk for 2 weeks) in 20 patients with a 1:1 ratio. Each patient underwent a clinical evaluation before and after real or sham stimulation, with follow-up evaluations at 1 and 3 months. After a washout period of 3 months after the last visit (i.e., T3), each patient received the opposite treatment (crossover phase) and underwent the same standardized assessment as in the first phase: at baseline, at 2 weeks, at 1 month, and at 3 months.
Both studies used the same outcome measures and identical data-analysis methods.
What is your view on the below:
1. Should both studies be included in the review?
2. Since the authors seem to have used the same participants (same research authors, same institution, identical sample size, same outcome measurements), could the inclusion of the two studies bias the findings of the review?
Thank you!
Pasquale
