Article

Statistical malpractice

Abstract

Statistical malpractice is an insidious, and indeed prestige-laden and grant-rewarded, activity. Brilliantly clever, but fundamentally wrong-headed, number-crunchers are encouraged to devise inappropriate applications of mathematical methods to health problems. This species of misdirected zealot has so far been immune from criticism.

... On top of this, the patients in a megatrial population are always prognostically heterogeneous, because the methodology uses deliberately simplified protocols designed to optimize recruitment rather than control, and meta-analyses are even more heterogeneous [3,8]. In a megatrial that shows an overall benefit, it is very probable that the treatment will improve the outcome for some patients, make others worse, and leave still others unaffected. ...
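The masking effect described in that excerpt can be illustrated with a short simulation; the subgroup proportions and effect sizes below are purely hypothetical and are not drawn from any trial cited on this page. The point is only that a positive average effect is compatible with a substantial minority of patients being made worse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30_000  # megatrial-sized sample (hypothetical)

# Hypothetical prognostic subgroups with different true treatment effects
# (positive = benefit, negative = harm, on an arbitrary outcome scale).
subgroup = rng.choice(["improved", "unaffected", "harmed"], size=n, p=[0.40, 0.45, 0.15])
true_effect = np.select(
    [subgroup == "improved", subgroup == "unaffected", subgroup == "harmed"],
    [2.0, 0.0, -1.5],
)

# Observed change in outcome = individual treatment effect + measurement noise.
outcome = true_effect + rng.normal(0.0, 3.0, size=n)

# The trial reports only the average effect, which is clearly positive...
print(f"mean treatment effect: {outcome.mean():+.2f}")
# ...even though 15% of subjects were, by construction, made worse.
print(f"share harmed by construction: {np.mean(subgroup == 'harmed'):.0%}")
```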
... Before megatrials could become so widely and profoundly misunderstood, it was necessary that the statistical aspects of research should become wildly overvalued. Properly, statistics is a means to the end of scientific understanding [3]; when studying medical interventions, that understanding could be termed 'clinical science', an enterprise for which the qualifications would include knowledge of disease and experience of patients [1]. People with such qualifications would provide the basis for a leadership role in research into the effectiveness of drugs and other technologies. ...
Article
Full-text available
The fundamental methodological deficiency of megatrials is deliberate reduction of experimental control in order to maximize recruitment and compliance of subjects. Hence, typical megatrials recruit pathologically and prognostically heterogeneous subjects, and protocols typically fail to exclude significant confounders. Therefore, most megatrials do not test a scientific hypothesis, nor are they informative about individual patients. The proper function of a megatrial is precise measurement of effect size for a therapeutic intervention. Valid megatrials can be designed only when simplification can be achieved without significantly affecting experimental control. Megatrials should be conducted only at the end of a long process of therapeutic development, and must always be designed and interpreted in the context of relevant scientific and clinical information.
... Indeed reflex hostility seems largely confined to those who misunderstand and overestimate the role of statistics in science [8]. While clinical case studies are barred from some medical journals, psychological case studies of patients with unusual brain lesions are frequently published in the most prestigious 'pure' scientific journals such as Science and Nature [9,10]. The nature of formal case studies involves two crucial methodological principles [1]. First, developing and deriving a scientific theory of sufficient precision to have implications for individual cases. ...
Article
The critique of EBM is not meant to discard EBM in general, but rather to challenge its shortcomings and to support its positive intentions by introducing a philosophy of evidence and providing a constructive critique of its concepts, methods, and findings. EBM research studies are based largely on complex mathematical and statistical data and data analysis. Statistics give not clinical results but only statistical results. When we quantify, we typically remove all of the qualities from the individual. Evidence answers the question of how we know something; we need philosophical analysis to determine what evidence is. This is philosophical evidence-based medicine. The problem of placebo in EBM is not yet resolved. Placebo is defined in this chapter as the positive assessments and emotions one has, and these do have a bodily effect.
Article
A powerful impetus behind the rise of the ‘megatrial’ (a large, simple, usually multi-centred randomized controlled trial analysed by ‘intention to treat’) has been the desire for ever-increasing precision in the measurement of therapeutic effectiveness. However, the demand for precision has been allowed to override other, more important methodological considerations. Megatrials have progressively abandoned the pursuit of scientifically rigorous experimentation, valid measurement and optimal epidemiological sampling in favour of recruiting and processing large numbers of subjects. This is a mistaken strategy which leads inevitably to error, because investigators are seeking a primarily statistical, rather than clinically or scientifically relevant, notion of exactness. We are now in a position to describe a clinical research strategy which offers many advantages over a megatrial-led approach. Research should be planned with an awareness that the validity and applicability of estimates are more important than their numerical precision, and that this requires both an unselected denominator population database of all incident cases and maximally controlled randomized trials and other studies. The Population-Adjusted Clinical Epidemiology (PACE) strategy is suggested as exemplifying the twin principles of clinically useful research: rigorous science and representative epidemiology.
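A minimal sketch (with entirely invented numbers, not taken from any study discussed here) shows why precision cannot substitute for validity: when an uncontrolled prognostic difference separates the arms, enlarging the sample narrows the confidence interval around the estimate but leaves the bias untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

def poorly_controlled_trial(n, true_effect=0.0, confounder_bias=0.5):
    """Simulate a comparison in which the treated arm differs from the
    control arm by an uncontrolled prognostic factor (confounder_bias),
    so the estimated effect is shifted by that amount regardless of n."""
    control = rng.normal(0.0, 2.0, n)
    treated = rng.normal(true_effect + confounder_bias, 2.0, n)
    estimate = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    return estimate, se

for n in (100, 10_000, 1_000_000):
    estimate, se = poorly_controlled_trial(n)
    print(f"n={n:>9,}: estimate={estimate:+.3f}, 95% CI half-width={1.96 * se:.3f}")

# The interval shrinks as n grows, but the estimate converges on the bias
# (0.5), not on the true effect (0.0): size buys precision, not validity.
```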
Article
Full-text available
There appears to be a broad consensus that estrogen is a cause of breast cancer. Proof of cause and effect in clinical medicine requires a different approach for an epidemiological exposure (a 'mosaic' approach) than for an infectious agent suspected of causing a particular disease (a 'chain of evidence' approach). This paper discusses the differences between these two approaches in determining the relationship between a risk factor and a disease, and assesses the strength of the data linking estrogen with breast cancer. Analysis of existing data, including findings from the Women's Health Initiative, finds that the nine criteria necessary for confirming the epidemiological strength of a risk factor are not all met in the case of estrogen, raising serious questions about the validity of this widespread assumption.
Article
Despite their prestige, megatrials are founded upon a methodological error. This is the assumption that randomization of very large numbers of subjects can compensate for deliberately reduced levels of experimental control, but there is no trade-off between size and rigour. Randomized trials are not a 'gold standard' because no method is intrinsically valid; there are good and bad trials. Interpretation of megatrials is always difficult and requires considerable clinical and scientific knowledge. Three fundamental parameters should be considered when evaluating the applicability of a trial to clinical practice: rigour of design; representativeness of the trial population; and homogeneity of the recruited subjects.
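As a rough illustration of how these three parameters might be recorded during appraisal, here is a hypothetical checklist structure; the field names and example gradings are invented for this sketch and are not part of the original paper.

```python
from dataclasses import dataclass

@dataclass
class TrialAppraisal:
    # The three parameters named in the abstract, each graded informally.
    rigour_of_design: str         # control of confounders, blinding, protocol strictness
    representativeness: str       # how well the trial population matches clinic patients
    homogeneity_of_subjects: str  # prognostic and pathological similarity of recruits

# Hypothetical appraisal of a typical megatrial, as characterised above.
example = TrialAppraisal(
    rigour_of_design="low: simplified protocol, limited confounder control",
    representativeness="uncertain: recruitment optimised for numbers, not sampling",
    homogeneity_of_subjects="low: prognostically heterogeneous subjects",
)
print(example)
```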