Int. J. Behavioural and Healthcare Research, Vol. 5, Nos. 1/2, 2015
Copyright © 2015 Inderscience Enterprises Ltd.
Critical analysis: a vital element in healthcare research
Charles Micallef
Ministry of Health,
15, Merchants Street, Valletta, VLT 1171, Malta
Kunsill Malti għall-iSport (Malta Sports Council),
Spinelli Street, Gzira, GZR 1712, Malta
Abstract: Critical analysis questions literature quality through positive and
negative critique. The paper guides students and novice researchers on
objective critical thinking and writing for assignment excellence and
publication acceptance respectively, and helps clinicians evaluate healthcare
and behavioural literature and thus make better decisions for the patient. The
article touches on the hierarchy of study designs and the critical appraisal
principles of causality, reliability, validity and execution, including
statistical issues. Moreover, it looks at other aspects such as title
appropriateness, standardised English writing style, data presentation,
referencing quality and extraneous factors such as competing interests.
Objective measurements should also be critically evaluated. Even a review
paper of other peer-reviewed reviews has to be critically evaluated. Creating
debate between authors is recommended. Triangulation and reflexivity are
important for qualitative research rigour. Issues of originality versus
repeatability and ethical aspects, including risk assessment and sample size
justification, are appropriately covered. Critical evaluation questions what
the research has contributed to society. An element of scepticism is essential
for critical thinking. Critical analysis should first be applied to one’s own
work by going through a set of ask-yourself questions.
Keywords: behavioural research; critical analysis; critical appraisal; critical
evaluation; critical thinking; critical writing; critique; healthcare research.
Reference to this paper should be made as follows: Micallef, C. (2015)
‘Critical analysis: a vital element in healthcare research’, Int. J. Behavioural
and Healthcare Research, Vol. 5, Nos. 1/2, pp.104–123.
Biographical notes: Charles Micallef graduated in Pharmacy in 1991 from the
University of Malta. He specialised in physical activity and public health at
Staffordshire University. Prior to enrolling for the Masters, he was the lead
author of ‘Assessing the capabilities of 11-year-olds for three types of basic
physical activities’. Within a year after presenting his dissertation on Zumba
exercise for weight loss in 2013, he published as a sole author: ‘The
effectiveness of an eight-week Zumba program for weight reduction in a group
of Maltese overweight and obese women’, ‘Associations of weight loss in
relation to age and body mass index in a group of Maltese overweight and
obese women during an eight-week Zumba programme’, and ‘Community
development as a possible approach for the management of diabetes mellitus
focusing on physical activity lifestyle changes: a model proposed for Maltese
people with diabetes.’ The author voluntarily supervises students’ dissertations
and reviews papers for other journals.
1 Introduction
The paper discusses the assessment of healthcare literature quality through critical
analysis or critique. This entails more than simply negative judgement. High standard
academic critical writing uses reasons and evidence to perform a fair and
sometimes detailed assessment in order to support one’s standpoint. Therefore, critical
analysis is more than just stating the strengths (merits) and weaknesses (limitations) of
your study findings; it also needs to be applied to the literature reviewed in the introduction
or literature review section of your paper or thesis. In fact, the word ‘review’ means a
critical appraisal of a piece of work. In other words, the literature review, and research in
general, are expected to be more than mere descriptions of other researchers’ findings.
When one shows that he/she is able to think critically and objectively about an issue
and to present a well-constructed argument to support a point of view, the possibility of
having his/her work accepted for publication in a scientific journal is high, even if there
are no positive findings to report. A famous quote by Albert Szent-Györgyi (1893–1986)
reads, “research is to see what everybody else has seen and to think what nobody else has
thought”. Furthermore, the award of higher grades in assignments, including theses,
could also demand the application of criticism in both positive and negative forms.
However, the main scope of this article is more than merely providing guidance to
healthcare students, novice researchers, clinicians and other healthcare professionals in
obtaining high grades in assignments or in having papers accepted for publication. A
wise, busy clinician searching for evidence-based healthcare solutions for his/her patients
may go straight for systematic reviews or for reports by legitimate, international
organisations that have already evaluated and summarised the relevant studies, but what if
such evidence is lacking? This article should then prove useful in helping clinicians
arrive at better conclusions for their patients when only one or a few primary studies
(original articles) are available.
A discredited study may, however, appear with no identification of its weaknesses and
may mislead even informed critics. It is also important to know that once papers enter the
electronic literature they tend to remain there, and a paper is rarely retracted merely for
being misleading or for serious flaws that the authors have not highlighted.
authors. So, the busy practitioner needs to be informed.
Even press statements issued by organisations should not automatically be taken as
authoritative. One should find time to read the full report because an international body
may base its conclusions on whatever evidence is fed into it by the participating parties.
Apart from the usual critical appraisal issues found in most textbooks, the paper
attempts to help the reader consider other features of academic writing and reviewing that
are normally not taken into account as being subject to critical evaluation. These could
also lead to adverse effects on preventive and curative healthcare if they are not evaluated
with a critical eye. For example, a busy clinician with poor knowledge of critical analysis
may not have time to read a 5,000-word article, and if the abstract does not adequately
and clearly summarise the findings, he/she may be tempted to base a decision on a catchy
title if unaware that titles can be misleading. Extraneous factors such as competing
interests are among the important aspects to be considered before accepting any research
proposal and findings. In addition to students, novice researchers and healthcare
professionals, overall this paper should interest academic supervisors and examiners from
various faculties (particularly those related to healthcare and behavioural sciences), ethics
committee and dissertation board members, and journal editors and reviewers.
The way this paper is structured and presented is somewhat unique because it tries to
touch every possible aspect of critical analysis even though in any particular case you
would not be utilising its full potential. Although a logical pattern was used with one
subheading and corresponding section leading to the next whenever possible, each
section can be read and understood independently from the other. Some section
overlapping was unavoidable.
2 Common instances of critique applications, starting with the title
The introduction of any essay, dissertation or paper should be evaluative and critical of
the studies which have a particular bearing on your own assignment or research (Stewart
and Sampson, 2012). For example, you may think that the authors failed to identify some
limitations due to certain threats to the study’s internal validity. It may also be possible to
critically comment upon the suitability of the study design, the adequacy of the sample
size, the data collection process and so on.
Critical analysis actually starts with the title of the paper. Was the title a good
description for what was implemented or simply a sensational title to catch the readers’
attention as in newspaper headlines? The title should adequately capture the variables and
population under investigation (Polit and Hungler, 1998). For example, if a study on the
evaluation of a particular weight loss program on a selected, small group of obese
participants had to bear the following title, ‘the effectiveness of a ten-week dietary and
exercise intervention in reducing excess body weight in Maltese obese women’, it could
lead to a labelling issue because such a misleading title would give the impression that it
was a national (large-scale) programme or that the sample was representative of the target
population.
A title could also misguide the reader into thinking that there was a degree of
causality and that the findings were consistent (replicated several times) as for example,
‘vitamin X protects against breast cancer.’ If evidence to support causation was poor such
as when an association is identified through correlational research and the study had not
been previously performed, a more appropriate title could be, ‘study shows that vitamin
X is linked with breast cancer prevention’ or ‘relationship between vitamin X and breast
cancer among ….’ The difficulty of interpreting such findings stems from the fact that in
the real world, behaviours, states and characteristics are interrelated in complex ways. If
cause and effect is, however, suspected, one should apply Bradford Hill’s criteria for
determining causation (University of South Alabama, n.d.).
The next thing an examiner or reviewer probably looks at is the standard of scientific
English used; whether it complies with the specified writing style. Grammatically
incorrect English, especially if it also lacks a coordinated flow of text, could give a
feeling that the paper is not going in any particular direction. Using vague (imprecise)
statements like, “until the last quarter of 2013, most of our patients received heparin” is
also not recommended in scientific writing.
3 Appraising the evidence helps put the right findings into practice
Researchers should be very cautious in the interpretation of their findings or the results of
other authors. Probability terms like, ‘it is likely’ or ‘unlikely’, and other tentative terms
should be used when appropriate. Jumping to premature conclusions can have serious
repercussions on healthcare.
To illustrate what it means to put findings into practice let us consider a study on
Ebola transmission. What seems to be reported by some authorities as positive findings
resulting from studies on non-human primates, whereby the virus, under the specified
experimental conditions, was found to be non-transmissible via an airborne route
(Alimonti et al., 2014), should still not be translated as directly applying to humans. Even
if the scientific community manages to find healthy volunteers for experimental research
on Ebola transmission (which is most unlikely!), it could take legitimate organisations
quite some time to evaluate the replicated and consistent results of several studies
on human subjects before issuing any public statements that Ebola cannot be transmitted
in humans through coughing and sneezing. Still, one can question why the subjects were
not studied under real-life circumstances. This does not necessarily invalidate the studies
themselves but may cast doubt on the applicability of the research findings into practice.
Knowledge of critical analysis therefore helps researchers thoroughly evaluate the
available literature and their own works in the best ways possible. This implies that
eventually they should be able to implement the right findings in a relatively safe
patient-centred approach. As a general rule, especially when facing any healthcare threat,
it is critical to make conclusions and take decisions based on facts.
4 Critical writing as a skill
There is no need to feel hesitant about criticising published work. Of course, a negative
critical evaluation should ideally be balanced with a positive one. Therefore, do not
refrain from also stating the study’s strengths.
Critical writing is a skill that does not come automatically when writing your doctoral
thesis. One has to start practising it in preferably all assignments at masters’ level and to
some extent at undergraduate level as well. Whether you are faced with an original
research article (primary study) or a review article (secondary study), follow this simple
advice when trying to evaluate it. First imagine that the paper was written by your
adversary who is competing with you for the same post. What would you do? You would
probably make sure that no irregularities in his/her paper remain unnoticed. However, it
is still important to maintain sensitivity when handling negative comments. Tentative
(cautious) language is an important feature of academic writing. For example, it is more
appropriate to write: “as …, there appears to be an error in this statement”, instead of, “as
…, this statement is not true.”
Now switch roles: imagine instead being dependent on this author for a promotion or job
qualification. You would now probably make every effort to highlight each and every
positive aspect of his/her paper and give praise accordingly. The accent here is on the
word ‘accordingly’. Make sure you do not exaggerate; too much positive criticism with
lengthy sweet phrases will also spoil your work. Just point out the strengths without
unnecessary adjectives and justify why you consider them as strengths. For example, “as
self-reported data generally underestimated the prevalence of obesity (World Health
Organization, 2007), weight measurements were recorded objectively by the researcher.”
When it comes to appraising your own work in hope of identifying all its weaknesses
and other areas that are subject to further improvements, there is no better way to learn
how to revise it than to perform critical analysis on other authors’ works. You will find
that your critical eye works much better when it is focused on their works than it does
when it is focused on your assignment or manuscript. You can be more objective when
looking at someone else’s work and you can see more easily what has gone wrong in
their papers and how you could improve their reports. When you practise these skills on
someone else’s paper, you become more proficient at practising them on your own work
(Institute for Writing and Rhetoric, 2014).
An assignment which is too descriptive would probably be very boring to read.
Adding quality critique in your introduction or literature review and discussion sections
spices up your work and makes the reader/reviewer/examiner want to continue reading it.
A common mistake among students is making grandiose claims to support
their conclusions, such as when they do not duly consider the threats to their research’s
internal validity. Another example is when a correlational finding (association) is
confused with causation. The generalisability of conclusions (external validity) offers
another opportunity for critical appraisal. These are just a few aspects of critical appraisal
of the literature. Although they are all covered in standard textbooks (Crombie, 1996;
Gosall and Gosall, 2012; Greenhalgh, 2014; Straus et al., 2011), journal articles
(Greenhalgh, 1997a, 1997b; Greenhalgh and Taylor, 1997) and web-resources (Cardiff
University, 2013; McMaster University, 2008; University of South Australia, 2014), the
ability to use these tools is not always straightforward and does not come overnight – it
needs practising. This is felt even more when no specific set of standard evaluation
questions is available, as will be seen in the next section.
5 Even a review of reviews deserves critique
Let us consider the open-access paper by Ding and Gebel (2012), ‘Built environment,
physical activity, and obesity: what have we learned from reviewing the literature?’ Take
note of the overall structure of the paper and how they clearly explained their search
techniques and presented their tables clearly. Observe how the authors
conducted the critical analysis and gave their recommendations. They reported the
weaknesses of other review articles and supported their standpoint as follows: “… few
reviews assessed the methodological quality of the primary studies, and some did not
report critical information, such as data sources, the time frame for the literature search,
or the total number of studies included. Future reviews should adopt a more systematic
review methodology to assist in the synthesis of the evidence.”
As the authors assessed review papers, at first glance one would expect them to use a
validated critical appraisal tool such as the Critical Appraisal Skills Programme (CASP)
(Stewart, 2010). Rightly, since they were interested only in how these review
articles reported the relationships of the built environment with physical
activity and obesity (and not in the quality of the reviews per se), they had to
devise eight specific evaluation questions.
One would expect that, being a double-blind peer-reviewed review paper of other
peer-reviewed review articles, this paper would be flawless. Nevertheless, it has some
imperfections. The authors had a habit of using the personal term ‘we’, as in, “we
searched the literature for peer-reviewed review articles that were published in English
from January 1990 till July 2011.” A more appropriate scientific way of writing this
statement would be: “peer-reviewed review articles that were published in English
between January 1990 and July 2011 were searched.” The article also lacks a short,
general conclusion.
6 Create debate between authors
At times, even critique after critique could be dull to read. This could be overcome by
discreetly creating debate between authors of contradicting findings or opposing
opinions, though you would still need to take a position to support your argument. For
instance, the author of a particular study is, in your opinion, making over-reaching claims
because he/she did not investigate some necessary aspect of the study. Then through
further literature searching you find a paper that supports your thinking. Therefore, your
write-up may look something like this: “whereas author X (2011) was claiming that B
was the outcome of A, as argued by author Y (2013), it is still early to conclude that A
was causing B because the study did not have an appropriate control group.”
Ding and Gebel (2012) identified that review studies had to be more specific in the
reporting of their findings: “… almost half of the reviews either combined adults with
youth or did not specify target age groups.” However, the authors wanted to support their
beliefs by quoting other authors who had previously arrived at this conclusion; this
automatically engaged other researchers in the debate: “… to avoid misleading
conclusions, reviews should focus on one age group, or stratify studies by age (Ding
et al., 2011; Wong et al., 2011).”
7 Evidence-based healthcare: challenging study design rankings
No account on critical analysis is complete without a touch of evidence-based healthcare
which is widely accepted as the ideal practice for patients in order to receive the best
clinical management or intervention. With experience you would also learn how to
challenge what apparently looks scientifically ideal.
Some epidemiologists and experimental researchers have a tendency to take the
hierarchy of study types and designs as gospel. Overviews or secondary studies (systematic
reviews and meta-analyses) followed by randomised controlled trials (RCTs) have
traditionally been regarded as the best quality study types for assessing evidence of
effectiveness.
However, the ranking system of study types does not always apply smoothly in
practice. As Stewart (2010) pointed out, a well-designed cohort study may provide better
evidence than a badly conducted RCT. Every-Palmer and Howick (2014) explained how
a number of industry-funded randomised trials in pharmaceutical research have been
corrupted by vested interests involved in the choice of hypothesis tested, in the
manipulation of study design and in the selective reporting of such trials. The authors
suggested that evidence-ranking schemes need to be modified to take industry bias into
account.
When comparing two drugs, it could be that one drug had more therapeutic effects
than the other not because it was pharmacologically more potent but due to formulation
properties that adversely affected the distribution (dispersion) process of the seemingly
inferior drug. Therefore, a basic understanding of pharmacokinetics is also necessary
when reviewing medical literature. Researchers could use a comparator drug product with
formulation problems in order to favour the drug under investigation.
Many researchers opt for prevalence studies. Let us see what Ding and Gebel (2012)
had to say: “… evidence has come from cross-sectional studies, which cannot provide
strong support for causality.” … “More longitudinal studies are encouraged because they
account for temporal order.”
Uncontrolled experimental studies, case reports and case series are generally faster
and more convenient to perform than prevalence studies. They usually have in common a
before-after or repeated measures approach with no controls and as expected, rank poorly
in the hierarchy. There are some journals in which the authors’ guidelines specifically
state that studies with no controls would be rejected upon submission.
Controls are especially important when rigorously testing the effectiveness of drugs,
vaccines and other interventions. However, there can be circumstances when
uncontrolled approaches are justified as with the pre-test post-test single group study
regarding the effectiveness of a Zumba program on body composition (Micallef, 2014a).
In such behavioural science research involving subjects under free-living conditions,
blinding and placebo controls cannot be performed and it is practically impossible to
isolate program and control groups from each other in order to prevent social interaction
threats, such as compensatory rivalry and resentful demoralisation, from occurring.
Moreover, the researcher could never know what the subjects were doing in their private
lives.
Case reports and case series too should not be negatively labelled as long as they are
performed for the preliminary evaluation of novel therapies or when a controlled trial is
neither logistically feasible nor ethically justifiable.
Professional reviewers often use templates with validated checklists when critically
appraising studies. Each appraisal tool is usually specific to a particular study type and
although the questions vary for different studies, each reviewer normally has two main
questions to investigate:
What did the authors actually find?
Should their findings be trusted?
Reference was earlier made to the CASP which contains questions for assessing reviews
(see ‘even a review of reviews deserves critique’). Some scales have also been developed
to gauge the quality of studies such as the Jadad Scale which is used to assess the quality
of RCTs (Royal Australasian College of Surgeons, n.d.).
8 Targets for assessing evidence of effectiveness
When assessing the strength of evidence of effectiveness, critical appraisal should target
mostly the quality of study design and execution. Reliability, which is the extent to which
the study results can be replicated to obtain a constant result, and validity, are important
factors to be considered when assessing the design quality (Stewart, 2010). We have
already seen that a badly conducted (poor quality) RCT can rank below a well-designed
cohort study.
Validity, when applied to research tools, refers to how accurately they actually
measure what they are required to measure. If a questionnaire is expected to explore the
pharmacists’ views in dispensing antibiotics without prescriptions and only consists of
questions that examine their knowledge of pharmacology on antibiotics, it would not be
the right tool for the research’s aim. Internal validity is relevant when the resultant
differences are due only to the hypothesised effect whereas external validity refers to how
generalisable the results are to their target population (Stewart, 2010). The reader is
advised to acquaint him/herself well about the various threats to validity, types of bias
and confounding that could occur in research.
The study execution refers to factors related to the actual outcome measurements
including adequate frequency and duration of the intervention, instrumentation, data
analysis and interpretation of the results. Even objective measurements, of energy
balance for example, need to be critically assessed. Although 37 obesity researchers and
experts reported that objective measurements should be used in preference to
self-reported ones, which they rightly criticised as too inaccurate (Dhurandhar et al.,
2015), it could be that instruments for energy cost studies, such as heart rate telemetry
and open-circuit spirometry, encumber the subjects’ physical activity movements, thus
lowering their energy expenditure (Micallef, 2014a).
One should also be careful not to condemn a new intervention which lacks sufficient
evidence of effectiveness. As Crawford et al. (2002) noted, the absence of evidence
should not be mistaken for the absence of effect.
A single study could represent reasonable evidence, but its strength of evidence
remains limited. A large number of studies constitute a stronger body of evidence
because several replications reduce the likelihood that the results of individual studies
could be caused by chance or due to bias.
9 Statistical issues including data presentation
A good understanding of statistical terms such as, probability or significance value (P),
confidence interval (CI), effect size (ES), t-test, analysis of variance (ANOVA), analysis
of covariance (ANCOVA), multiple linear regression, and epidemiological associations
like relative risk (RR), absolute risk (AR) and odds ratio (OR), is essential when
evaluating the study’s execution. Types 1 and 2 errors should be recognised when
interpreting the P-value; the latter depends on the CI around the measure. Research
involving categorical (qualitative) variables often employs chi-squared (χ2) tests and
logistic regression.
A critical evaluation of statistical findings may question the level of significance
chosen and why CI and ES have been excluded. Stewart (2010) advised that for critical
studies, such as treatment trials, statistical significance is best set at P < 0.01 instead of
< 0.05. However, it is important to know that statistically significant results are not
necessarily clinically (practically) significant. A reduction in the mean diastolic blood
pressure of a group of adults, from 110 to 100 mmHg, may have a P-value of < 0.0005
but would still be above the healthy level. Another way of testing hypotheses is through
the CI. Moreover, when it comes to judging clinical significance for an intervention,
Sturmberg and Topolski (2014) recommended the calculation of the ES.
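To make the distinction concrete, here is a minimal sketch of an effect size calculation for the blood pressure example above; the standard deviation of 8 mmHg is an assumption invented purely for illustration:

```python
# Cohen's d for a mean diastolic blood pressure drop from 110 to 100 mmHg.
# The pooled standard deviation (8 mmHg) is a hypothetical figure chosen
# only to illustrate the calculation.
mean_before = 110.0  # mmHg
mean_after = 100.0   # mmHg
pooled_sd = 8.0      # mmHg (assumed)

cohens_d = (mean_before - mean_after) / pooled_sd
print(f"Effect size (Cohen's d): {cohens_d:.2f}")  # prints 1.25

# A large d indicates a substantial change relative to variability, yet
# 100 mmHg is still above the healthy diastolic level, so clinical
# judgement remains a separate question from the statistics.
```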
Sturmberg and Topolski (2014) also warned against the misrepresentation of findings
through statistics. Manipulation of the denominator could help the overselling of
seemingly superior therapeutic products. The example brought by the authors involved a
scenario in which the percentage of people dying from a condition initially
appeared very high (one person out of four) because the denominator included only four
people identified with the condition. Identifying eight more people with the
condition (now 12 in all), with still only one death, suggested that the mortality rate had
fallen from 25% to 8.3%, when in reality nothing had been gained by identifying more
affected people because the mortality in the whole community was constant.
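The arithmetic of this denominator effect can be sketched in a few lines, using the figures quoted above:

```python
# The same single death looks very different depending on how many
# affected people have been identified (the denominator).
deaths = 1
initially_identified = 4    # people known to have the condition at first
later_identified = 12       # after case-finding uncovers eight more

rate_before = deaths / initially_identified * 100
rate_after = deaths / later_identified * 100

print(f"Apparent mortality before case-finding: {rate_before:.1f}%")  # 25.0%
print(f"Apparent mortality after case-finding:  {rate_after:.1f}%")   # 8.3%

# The number of deaths in the community is unchanged, so the apparent
# improvement reflects only a change of denominator, not of outcome.
```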
Among other statistical fallacies that Sturmberg and Topolski (2014) continued to
highlight were two that cannot be ignored. Firstly, the randomisation process
underpinning the RCT aims to stratify subjects by a set of pre-defined characteristics and
assumes that people are predictable mechanistic entities when in reality the human
body behaves in complex adaptive ways; thus, its response to challenges is
non-deterministic. Then there was the issue of relative versus absolute statistics.
Researchers may present results that are most likely to impress by reporting a reduction
in RR rather than using the true or AR. Differences between the intervention and control
arms of a study can be magnified if the relative difference between the two groups, rather
than the more meaningful absolute difference, is reported.
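A short sketch makes this magnification tangible; the event rates below are invented solely for illustration:

```python
# Hypothetical trial: 2% of controls and 1% of treated patients
# experience the adverse event.
control_rate = 0.02
treatment_rate = 0.01

arr = control_rate - treatment_rate   # absolute risk reduction
rrr = arr / control_rate              # relative risk reduction

print(f"Absolute risk reduction: {arr:.1%}")   # prints 1.0%
print(f"Relative risk reduction: {rrr:.0%}")   # prints 50%

# "Halves the risk" (RRR) sounds far more impressive than "one fewer
# event per 100 patients treated" (ARR), although both statements
# describe exactly the same data.
```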
Students and novice researchers are inclined to leave the statistical calculations in
the hands of a statistician and then end up obliged to include him/her in the
study’s authorship during the publication process. I have come across statisticians
featuring as co-authors in a good range of studies covering completely different topics.
To openly involve a statistician as part of a research team is acceptable, but to then
claim that your dissertation is all your own work is cheating.
A student with poor knowledge of statistics is easily noticed as early as the
literature review (that is, before arriving at the data analysis stage): in trying to
critically evaluate the data gathered from various studies, he/she simply states
whether sample sizes were sufficiently large or not and quotes the statistical results
exactly as presented in the literature, without at least harmonising all the data into a
standardised form for the sake of clear comparison. Judging the sample sizes and
presenting all drug therapeutic levels from the various studies in one commonly applied
unit of concentration, in table form and without further statistical evaluation, is the
least one could do, and yet some students fail even this elementary task!
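As a minimal sketch of such harmonisation (the drug, its molar mass and the reported values below are all invented for illustration):

```python
# Hypothetical therapeutic levels of an imaginary drug, reported by
# three studies in different units, converted to a single unit.
MOLAR_MASS = 300.0  # g/mol (assumed value for the imaginary drug)

studies = [
    ("Study A", 6.0, "mg/L"),
    ("Study B", 15.0, "umol/L"),
    ("Study C", 4500.0, "ug/L"),
]

def to_umol_per_l(value: float, unit: str) -> float:
    """Convert a concentration to micromoles per litre."""
    if unit == "umol/L":
        return value
    if unit == "mg/L":     # (mg/L) * 1000 / (g/mol) -> umol/L
        return value * 1000.0 / MOLAR_MASS
    if unit == "ug/L":     # (ug/L) / (g/mol) -> umol/L
        return value / MOLAR_MASS
    raise ValueError(f"Unknown unit: {unit}")

print(f"{'Study':8} {'Reported':>10} {'':5} {'umol/L':>8}")
for name, value, unit in studies:
    print(f"{name:8} {value:>10} {unit:<5} {to_umol_per_l(value, unit):>8.1f}")
```

Once all values are expressed in the same unit, the reader can compare the studies at a glance, which is precisely the "elementary task" referred to above.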
Finally, this section would be incomplete without a brief discussion on how data can
be presented. Tables and graphs accompanied by appropriate and concise text (captions),
and numbered accordingly, should serve as visual aids for the reader to rapidly
understand what you are trying to convey in the text. They are also, however, elements
that easily catch the reader’s critical eye. Lengthy and complicated tables are
not recommended and ideally should be split into smaller ones whereas full tabulated
data may be included in appendices (Stewart and Sampson, 2012).
Figures usually take the form of various graphs like histograms, scatter-plots and pie
charts. Three-dimensional and other special effects can detract from easy and accurate
understanding (Stewart, 2010). Colour should only be used if essential. Healthcare
research papers, especially case reports and case series, also use photographs as figures.
In any case, high resolution figures should be produced.
10 Qualitative research and mixed methods approach
Although several researchers and doctors have traditionally been reluctant to go beyond
quantitative methods involving statistical figures, a good qualitative study could still
address a clinical problem. This could be done by: using a clearly formulated question,
using more than one research method (triangulation), and independently coding and
analysing the data by more than one researcher as a ‘quality control’ to confirm that they
both assigned the same interpretations (Greenhalgh and Taylor, 1997). For example, “the
focus groups’ data of people with diabetes were cross-checked with the information
gathered through the postal survey and the hospital records.” Quasi-statistical procedures
should also be used to validate the findings (Polit and Hungler, 1998).
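The level of agreement between two independent coders can be quantified; Cohen's kappa is one common statistic for this (it is not prescribed by the sources cited above, and the codes below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two raters assigning one code per item."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    # Agreement expected by chance, from each coder's marginal frequencies.
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten interview excerpts by two researchers.
a = ["barrier", "barrier", "support", "support", "barrier",
     "cost", "cost", "support", "barrier", "support"]
b = ["barrier", "support", "support", "support", "barrier",
     "cost", "cost", "support", "barrier", "barrier"]

print(f"kappa = {cohens_kappa(a, b):.2f}")
```

A kappa near 1 indicates near-perfect agreement, while a value near 0 suggests the coders agree little more than chance would predict, flagging that the coding scheme or its application needs revisiting.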
Respondent validation or member checking secures construct validity. For example,
to ensure that the researcher fully understood the views and perceptions that emerged
following a focus group theme discussion, a verification process was applied whereby the
accuracy of the findings was checked with the participants.
The context of the phenomenon under investigation should be adequately described.
Furthermore, the report should give the reader a clear picture of the social world of the
people under study (Polit and Hungler, 1998).
As a general rule, the interpretative researcher should apply reflexivity so that readers
could make greater sense of the analysis presented. Maintaining a sceptical approach to
the evidence acquired is important. For example, were you told what you wanted to hear
(Carter and Henderson, 2005)? The unconscious nodding by the researcher and the
regular ‘yeah’ and ‘right’ replies could be interpreted as agreeing with the subject and
thus act as an element of bias or false information (Gratton and Jones, 2010). According
to Willig (2013), personal reflexivity is when the researcher reflects upon ways in which
his/her own values, experiences, interests and beliefs could have shaped the research and
how the research may have affected him/herself whereas in epistemological reflexivity
the researcher should think about the implications of the assumptions by engaging with
questions such as:
How has the research question defined and limited what can be found?
How could the research have been investigated differently?
Further food for thought: what if the researcher, acting as an ‘observation instrument’, got
better over time at making the observations, resulting in an instrumentation threat? Was the
empirical evidence obtained through observations verified with other observers? This is
important because there is a tendency to believe what you see. Whether the study
deliberately recruited a reasonable number of individuals to truly fit the bill, and whether
data collection continued until saturation occurred (that is, when new information no
longer provided further insight), plus whether it was analysed through a systematic
process (for example, through content analysis), are other factors to be considered.
Furthermore, if grounded theory was used for the construction of a new theory, was it
used appropriately?
In either case, whether it is content analysis or grounded theory, Merriam (2009)
emphasised that qualitative data collection and analysis should be undertaken
concurrently. Otherwise, it is not only overwhelming (imagine that all data collection is
done and you are trying to deal with a pile of interview transcript papers and field notes
from your on-site observations plus a box-file full of relevant documents and literature),
but also jeopardises the potential for more rich data and valuable findings.
As with quantitative research, apart from replication, are the findings of qualitative
research transferable to other clinical settings? One of the commonest criticisms is when
they pertain only to the limited setting in which they were obtained (Greenhalgh and
Taylor, 1997).
There is also the issue of mixed methods research. Although quantitative methods
alone can be insufficient for the evaluation of certain interventions, mixed methods on the
other hand, may produce contradictory results. Nevertheless, pluralistic evaluation
normally accumulates evidence from a variety of different sources. In any case, whether
it is primarily quantitative research or qualitative research, the question of whether the
study could have been strengthened by mixed methods often crops up.
11 Construct validity
Normally, construct validity is applied to social sciences where subjectivity is involved,
but according to Trochim and Donnelly (2008), it is not limited to psychological
measures. They showed that construct validity also applies to the intervention itself. For
example, was the program a true weight loss program, or did the results only reflect a
peculiar version of the program that was held in a single place and at a particular time?
12 Originality versus repeatability: issues of practicality and creativity
There is no doubt that a researcher who is creative in his/her work earns high respect.
There are journals that instruct authors to specifically include a subheading saying what
the study has added to the existing knowledge on the subject.
Although the strength of quantitative research lies in its reliability, admittedly,
nobody likes to read the same steps of previous researchers. However, there are instances
when repeatability also earns credit. For example, if a second study derived the same
results as the first study but conducted the research either after a relevant campaign, or
through a different methodology, then in both scenarios, there would be a degree of
originality: either by finding whether the campaign was effective or not, or by verifying
the original results through a different pathway. Strictly speaking, it would be wrong to
say that a study was replicated if it did not follow exactly in the footsteps of its predecessor.
The issue of repeatability could be taken one step forward. A repeated study could be
scientifically sound (well designed with sufficiently robust methodology) and
academically justified (replicated preliminary data which was then no longer
inconclusive) but could lack national and even global interest. For example, a country
was experiencing an influx of immigrants suspected of carrying for the first time a
contagious disease for which no cure existed. Preliminary screening tests revealed
that 20% were infected with a lethal pathogen. Further thorough clinical investigations
confirmed this figure. Both findings were published and acknowledged by the scientific
research community.
However, as the authorities were unprepared to deal with a sudden outbreak of such
magnitude, the affected country was in dire need for effective solutions and public health
experts would have done a more useful job if they were to evaluate possible interventions
to control the disease than if they kept repeating practically the same prevalence
studies ad nauseam. Irrespective of whether the findings can be generalised to the whole
population or not, one could therefore criticise a study as having little or no practical
value to the host country especially if it was conducted in a state-owned university or was
financed by the nation or other sources, where the funds could have been used for more
fruitful research that would benefit society. On the other hand, whereas researchers
should avoid unnecessary replications they should also not leap several steps ahead when
there is an insecure foundation (Polit and Hungler, 1998).
Even when it comes to research originality, a novel model created for the sake of just
being different from conventional therapy without having value (importance or
usefulness) does not pertain to creativity (DeBono, 2006). An unusual model for people
with diabetes, based on community development, can be put into practice (Micallef,
2014b) and therefore earns credit for its creativity, but someone who comes up with a
proposal of having triangular room doors instead of rectangular ones would not be
accredited for creative thinking unless it can be shown to possess value.
13 Was the right target population selected?
A study was conducted to gather as much valuable data as possible on the symptoms (if any)
of cervical carcinoma. Irrespective of whether the researchers were looking for survey
respondents or clinical subjects, it would be of little scientific or medical value to select
all age groups of women, besides being considered unethical conduct. The
population of interest should be women between 25–69 years who are at risk of this type
of cancer (Bonita et al., 2006).
Researchers know that university students could be relatively easy subjects for
research in the sense that they would mostly comply with the research instructions and
are unlikely to drop out of the program. Hence, it is common to see studies with
recruitment criteria for young and apparently healthy volunteers. Such studies, however,
may contribute little to the advancement of clinical treatment.
Unless reasonably justified, gender inequalities could be another ground for criticism.
A national study on sexual health aspects encountered by adults could be incomplete if it
only randomly selected clear-cut genders (males and females) without employing
stratified sampling for gender-variant people (trans-genders). On the other hand, a study
on reaction time in a representative sample of schoolboys had excluded girls from
participating. However, in the limitations section, the authors admitted that a single-sex
study had to be conducted due to the religious culture of the country, which is acceptable
in that sense, although obviously one cannot infer that their results apply to all schoolchildren.
14 Ethical issues, risk assessments and sample size justification
When assessing for any breach of ethical standards, you should first see whether that
particular study was ethically approved at both institutional and national levels according
to the Helsinki Declaration of 1975, and subsequent revisions. Then, after thoroughly
reading the paper, without hesitation, express your moral views that should not be limited
to the usual written informed consent of the volunteers and the preservation of
confidentiality. For example, if any risks to subjects were predicted, such as cardiac
events during vigorous exercise in adults, certain control measures are expected to be
taken to address them. Such precautions could include the presentation of medical
clearance certificates and age-capped recruitment measures. A risk assessment resource
with tables, such as the one provided by Staffordshire University (1998), can provide
useful guidance here.
In the previous section, we saw that researchers who tend, without justification, to
grab hold of whatever category of human resources they can easily lay their hands upon
for their research, or who intentionally leave out specific subgroups, could be seen as
performing scientifically and morally wrong procedures.
It is also unethical to undertake a study with an unnecessarily large sample of subjects; it
could be a waste of time, money and human resources. On the other hand, whereas large
differences can be detected in small samples, small differences can only be identified in
large samples (Sturmberg and Topolski, 2014). Authors should justify the sample size
that allowed them to gain reliable insights, through a priori calculations whenever
possible (Stewart, 2010). They should also take into account any expected undesirable
outcomes such as dropout and response rates.
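As a rough sketch of such an a priori calculation, using the standard normal-approximation formula for comparing two proportions (the event rates, power, significance level and dropout rate below are illustrative assumptions, not values from any cited study):

```python
from math import ceil

# Illustrative assumptions: detect a drop in event rate from 30% to 20%,
# with 5% two-sided significance (z = 1.96) and 80% power (z = 0.84).
p1, p2 = 0.30, 0.20
z_alpha, z_beta = 1.96, 0.84

# Standard per-group sample size formula for comparing two proportions.
n = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
n = ceil(n)

# Inflate for an anticipated 20% dropout rate.
dropout = 0.20
recruit = ceil(n / (1 - dropout))

print(f"per group: {n}; recruit per group allowing for dropout: {recruit}")
# per group: 291; recruit per group allowing for dropout: 364
```

Reporting such a calculation, with its assumptions, is what allows a reader to judge whether the achieved sample size was adequate.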
The randomisation into intervention and control groups of patients suffering from
serious illnesses is also subject to ethical controversy, such as when control subjects are
deprived of treatment or when one group receives a treatment that has already been shown to be
inferior to the treatment of the other group. One method used by some pharmaceutical
companies to get the results they want from clinical trials is to compare drugs under study
with treatments known to be inferior (Smith, 2005). The vested interests in industry-
funded trials have been discussed by Every-Palmer and Howick (2014) in
‘evidence-based healthcare: challenging study design rankings’.
A word of advice may be useful here. Although both are subject to scrutiny, do not
confuse randomisation (random assignment) with random selection or random sampling
(also called probability sampling), which aims to make a sample more representative of
the population (generalisability).
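The distinction can be made concrete in a few lines (a toy sketch; the population and group sizes are invented):

```python
import random

random.seed(42)  # for a reproducible illustration

population = [f"person_{i}" for i in range(1000)]

# Random SELECTION (sampling): draw a representative sample
# from the population -> supports generalisability.
sample = random.sample(population, 20)

# Random ASSIGNMENT (randomisation): allocate the sampled subjects
# to intervention and control arms -> supports causal comparison.
random.shuffle(sample)
intervention, control = sample[:10], sample[10:]

print(len(intervention), len(control))  # 10 10
```

A study may do either, both, or neither; critically appraising it means checking each step separately.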
Animals too have rights. Some journals request authors who experimented with
animals to declare that they have respected EU Directive 2010/63/EU.
The issue of ethics can be further stretched to include critical evaluation of authorship
rights. A researcher may start getting numerous publications as ‘honorary author’ because
he/she is head of a branch or a respected scholar.
15 Quality and quantity of references
The reference list should, if possible, also be analysed. A primary study is generally
expected to contain around 15 to 30 references whereas a secondary study can be allowed
to have more than 50 references. Check whether the references were appropriate to the
text. Were current references used? Distinguish between high-value secondary research,
such as systematic reviews, and non-recommended secondary referencing, whereby authors
rely on somebody else’s version of a given study.
An organisation funding research may suppress a study with negative outcomes, and
some journals also prefer to publish studies which demonstrate positive findings. Stewart
(2010) described these potentially dangerous practices as publication bias. Under
‘evidence-based healthcare: challenging study design rankings’, selective publication was
shown to have thwarted the potential of evidence-based medicine for improving
healthcare. It is for these reasons that dissertations and other unpublished works should
not be ignored during literature searching as ‘grey literature’ could still be useful.
Comprehensive searching should ideally also cover studies in languages other than English.
Personal communications usually have little bearing. They should only be mentioned
in the text.
16 Critique your own study
This article started with the need to acknowledge the strengths and weaknesses of your
study findings. It is imperative to critique your own study before you openly criticise
other people’s work. It is therefore advisable to criticise without concealment your own
study rather than allow others to do it for you, especially if the reader is your examiner or
a reviewer deciding whether to accept your article for publication or not. When you
highlight and discuss your own limitations and avoid jumping into grandiose conclusions,
you are not being naïve; on the contrary you are showing that you are a mature, honest
researcher and deserve due recognition of your hard work even if you have no positive
results to reveal.
17 Extraneous factors
One should also try to analyse factors that are unrelated to the study per se. For example,
was there any conflict of interest? Drug companies often sponsor researchers to evaluate
their medicinal products and ruling out competing interests could therefore be hard. It has
been briefly shown that a number of industry-funded studies can be associated with
certain flaws (see ‘evidence-based healthcare: challenging study design rankings’). As
Every-Palmer and Howick (2014) suggested, more investment in independent research is
required. In addition to financial gain, the welfare of patients or the validity of
research may be influenced by other secondary interests such as personal rivalry.
Try to delve into the journal’s history, editorial board and the instructions for authors.
How long has the journal been established and what is its acceptance rate (if available)?
Some journals boast a rejection rate of 90%. If metrics like impact factor and h-index
are available, take note of them.
Was the article or book peer-reviewed through a double-blind reviewing process?
Apart from journal articles, the paper could be a chapter in an edited book or it could be a
whole dissertation published in the form of a textbook. For blind reviewing, any form of
personal identification including acknowledgements and conflicts of interest that could
somehow affect the reviewers’ judgements should be submitted separately and not in the
same file containing the manuscript. Furthermore, the two reviewers have to be chosen
independently so as not to influence each other. The term ‘double-blind’ is also used in
connection with RCTs when neither the subjects nor those who administer the treatment
know who is in the experimental or control group.
If it is an open-access article (which carried a publication fee), could it be that the
lead or corresponding author was asked to decide on whether to publish the paper via
open-access method or through restricted procedure before the editor-in-chief evaluated
its suitability for the journal? Ideally, authors should be able to decide on whether to go
for the open-access option or not only after acceptance for publication in order to ensure
the decision had no influence on the acceptance process.
Was the journal’s editorial office (especially the editor-in-chief and associate editors)
related to the main author’s academic institution? Do not let your evaluation skills get
influenced by the authors’ academic profiles and affiliations. Rightly so, several journals
do not publish the authors’ qualifications and in most papers the corresponding author
also has to quantify what each author has contributed to the study. We have seen in
‘ethical issues, risk assessments and sample size justification’ that some authors are
added to papers simply because they are important people.
18 The advantages of critique: a four-fold function
It could be that external validity was not an issue as when dealing with laboratory-based
experiments. Moreover, so as not to exceed the word count, you would probably be
selective and focus only on analysing the threats to internal validity and the quality of
execution in your mini literature review (introduction) and discussion sections of your
paper. So, in practice you may only directly utilise a small fraction of critical appraisal.
This however, does not mean that you need not inform yourself about the full spectrum of
critical analysis.
The benefits of critiquing are four-fold. At student level, it gives you the power to
express your critical judgements over the works of other authors in your literature review
section or chapter. This judgemental power is also utilised if you are reviewing a paper
for journal publication, a research proposal, or an assignment (be it a small essay or a
dissertation). Secondly, it helps you acknowledge your own study limitations in the
discussion section or chapter before others highlight them for you. Then, when you
further master critical thinking, it also automatically helps you perfect your whole
work by looking carefully at every detail, since it is understood that you would not want
others to criticise it. Under ‘critical writing as a skill’ it was explained how the critical
eye becomes cultivated for excellence the more you practise critical analysis on other
authors’ works. Furthermore, when a researcher or a healthcare professional is confident
in evaluating the literature including his/her own work, the findings can be implemented
in the best possible way for the benefit of the patient.
19 Ask-yourself-questions
The following questions can help you aim for perfection in your work. These could assist
in improving your work (whenever possible), in finding and accepting your limitations, in
avoiding negative criticism as far as possible, and in increasing the
chances for higher academic marks or for publication acceptance. They can also help in
reviewing or performing critical analysis on someone else’s work.
1 Is the title of the paper truly scientific and concise?
2 Does the abstract clearly summarise the main work and highlight the key findings to
encourage the reader to read the whole assignment or paper?
3 If it is a review paper, was it systematically performed by attempting to cover all
studies, published and unpublished, according to an established system that would
enable other persons to follow the same process and reach similar conclusions?
4 If you are dealing with a meta-analysis, does it systematically pool the results of two
or more clinical trials to obtain an overall answer to a specific question?
5 If the research is qualitative, did you follow accepted qualitative design and reporting standards?
6 Was critical analysis liberally applied to the literature review section or chapter?
7 Did the research question(s) and aim(s) arise naturally from the evidence presented
in the introduction or literature review?
8 Have you selected the most appropriate design?
9 Is the methodology sufficiently robust, with adequate control measures as far as possible?
10 Was the sample selected from the appropriate population and sufficiently large to
show any hypothesised changes, and, where generalisability is intended, was it randomly selected?
11 Did you perform the right statistical test(s)?
12 Have you fully adhered to ethical standards?
13 Were all the study aims or objectives assessed?
14 Have you double-checked all the calculations including data in tables and graphs and
seen that readers can quickly grasp the important characteristics of the data?
15 Does the discussion reflect all the results including negative (undesirable) findings
and relate them to your own ideas and possibly, to the findings of other researchers?
16 Did you cover the strengths and weaknesses of your study and suggest what might be
done in more ideal settings?
17 Does the conclusion provide effective closure for the paper by indicating the possible
future implications of the study and by preferably leaving the reader satisfied that
everything was scientifically explained?
18 Are there sufficient, current and quality references in the reference list?
19 Have you checked that the references match your in-text citations and that they all
conform to the latest edition of referencing style used or as specifically demanded by
the journal?
20 Overall, is the report written in an objective, unambiguous style, with correct
grammar, tentative language and precise statements, and presented logically through a
coordinated flow of information?
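Question 19 can be partially automated; the following is a rough sketch (the manuscript snippets, reference entries and regular expressions are invented for illustration, and such a script is an aid to, not a substitute for, manual checking):

```python
import re

# Toy manuscript text and reference list, invented for illustration.
manuscript = """Greenhalgh (1997a) admitted that some papers belong in the bin.
Triangulation improves rigour (Polit and Hungler, 1998).
A sceptical approach matters (Carter and Henderson, 2005)."""

reference_list = """Greenhalgh, T. (1997a) 'How to read a paper: getting your bearings'.
Polit, D.F. and Hungler, B.P. (1998) Nursing Research.
Stewart, A. (2010) Basic Statistics."""

# In-text citations: first author's surname plus a year, e.g.
# "Greenhalgh (1997a)" or "(Polit and Hungler, 1998)".
cite_re = re.compile(r"([A-Z][A-Za-z]+)(?: and [A-Z][A-Za-z]+)?[ ,(]+(\d{4}[a-z]?)")
# Reference entries: leading surname, then the first bracketed year.
ref_re = re.compile(r"^([A-Z][A-Za-z]+),.*?\((\d{4}[a-z]?)\)", re.MULTILINE)

cited = set(cite_re.findall(manuscript))
listed = set(ref_re.findall(reference_list))

print("cited but not listed:", cited - listed)  # Carter 2005 lacks an entry
print("listed but not cited:", listed - cited)  # Stewart 2010 is never cited
```

The two set differences surface exactly the mismatches question 19 asks about: citations missing from the list and entries never cited in the text.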
20 Conclusions and clarifications
It is hoped that the reader has realised that no published literature is infallible. As
Greenhalgh (1997a) admitted, some published papers cannot be used to inform practice
and belong in the bin. We saw that ‘even a review of reviews deserves critique’
and under ‘targets for assessing evidence of effectiveness’, a report co-signed by several
experts (Dhurandhar et al., 2015) was not immune to critical analysis. Perhaps the flaws
related to industry-funded research are most likely to be remembered but as
Every-Palmer and Howick (2014) admitted, all humans have biases and it would be naïve
to think that publicly-funded research is free from bias. In spite of all this, the essay in no
way undermines the standard of most published papers.
The healthcare researcher should have at least one paper in a scientific or academic
journal. Needless to say, publishing a single-author paper and a multi-author paper as
lead author certainly adds more credit to your profile. Having an article accepted for
publication after being sieved through a rigorous critiquing process is probably more
prestigious than several descriptive and unchallenged works. Succinctness is essential for
academic writing and indeed presenting a paper that does not exceed the stipulated
word-count limit could already be a challenge in itself. Even the academic quality of
publications pertaining to conference proceedings is usually not as high as that of
peer-reviewed papers. Although a journal paper may not be perfect, the peer review
system ensures a degree of quality control.
Researchers should adopt a somewhat sceptical attitude to even their own studies and
this goes beyond the positivist’s approach of deductive reasoning during application of
the null hypotheses in quantitative research. We have seen that a sceptical approach is
also essential during the reflexivity stage of qualitative research. The researcher should
always be critical of his/her own research techniques and those of others in the search for
scientific perfection and the truth. So, in today’s culture, do not take it too badly if
others label you a pessimist or associate you with Saint Thomas. You may still recall that there
is a tendency to believe what you see (see ‘qualitative research and mixed methods
approach’). For doubting Thomas, it was not enough to receive the news of the risen
Jesus from his trusted friends, the disciples, and to see with his own eyes, but he also
wanted to touch. Thomas’ finger can be regarded as a rudimentary scientific instrument
(Dixon, 2013).
Although it is undisputed that research inferences in evidence-based healthcare are
normally carried out through objective measurements, the use of instruments,
however sophisticated they may be, is still prone to instrumentation threats and measurement errors.
We have seen that instruments such as heart rate monitors can also limit the subjects’
In this account, the author tried to convey important ways of carrying out critical
analysis for successful research with appropriate cautions when necessary. The examples
given were only used to illustrate the text and as there are practically unlimited
possibilities of critique, the reader is advised that this account is by no means an
exhaustive checklist for critical analysis.
Admittedly, the article is sometimes controversial in nature. This stems from the fact
that the author attempted to explore most of the spectrum of critical analysis that goes
beyond what is normally covered under the framework of critical appraisal. Discussing
critical analysis is in itself a hot topic because all humans have a tendency to err and
researchers are no exceptions. So, it is understood that certain parts of the article may not
at all be pleasing to the reader if they remind him/her of something! Polit and Hungler
(1998) remarked that an evaluation of whether the most appropriate data collection
procedure was used could involve a degree of subjectiveness. They added that issues
concerning the appropriateness of various research strategies can be topics about which
even experts disagree.
As can be seen, the author’s main objective was not to go into detail on what we
already know from standard textbooks on critical appraisal but to help the reader focus on
other aspects which are often not considered for critical analysis. However, the paper still
encourages the use of standard appraising techniques for evaluating the methodological
quality of literature. Therefore, overall it should prove to be a useful judgemental tool for
healthcare (and to a lesser extent, behavioural research) students, professionals,
researchers and academic staff in general.
Here are some further clarifications:
Although the words ‘analysis’ and ‘appraisal’ can be used interchangeably, in most
of the text the term ‘critical analysis’ was used in preference to ‘critical appraisal’,
because unlike the latter, the former covers every aspect of a paper for its good
qualities and flaws – starting from the title and finishing off with the reference list.
Being a commentary article does not mean that this paper was exempted from blind reviewing.
As expected from an article of this type, its style is colloquial at times. Indeed, one
noticeable difference is that the use of personal terms was permitted.
Conflicts of interest
The author has no competing interests to declare.
Acknowledgements
I am indebted to my ex-tutors at Staffordshire University, namely, Prof. Antony Stewart
and Mrs. June Sampson. From day one of my masters’ course in physical activity and
public health, they had instilled into me the good and useful habit of applying critical
analysis in practically every assignment. Further acknowledgements go to the
Kunsill Malti għall-iSport (Malta Sports Council, KMS) and the Ministry of Health for
allowing me sufficient time to do the necessary research and preparation of this paper.
The technical support of Mr. William Galea, a KMS Executive Officer, is also acknowledged.
References
Alimonti, J., Leung, A., Jones, S., Gren, J., Qiu, X., Fernando, L., Balcewich, B., Wong, G.,
Ströher, U., Grolla, A., Strong, J. and Kobinger, G. (2014) ‘Evaluation of transmission risks
associated with in vivo replication of several high containment pathogens in a biosafety
level 4 laboratory’, Scientific Reports, Vol. 4, Article No. 5824.
Bonita, R., Beaglehole, R. and Kjellström, T. (2006) Basic Epidemiology, 2nd ed., World Health
Organization, Geneva.
Cardiff University (2013) Critical Appraisal of Healthcare Literature [online] (accessed 5 July 2015).
Carter, S. and Henderson, L. (2005) ‘Approaches to qualitative data collection in social science’, in
Bowling, A. and Ebrahim, S. (Eds.): Handbook of Health Research Methods: Investigation,
Measurement and Analysis, pp.215–229, Open University Press, Berkshire.
Crawford, M.J., Rutter, D., Manley, C., Weaver, T., Bhui, K., Fulop, N. and Tyrer, P. (2002)
‘Systematic review of involving patients in the planning and development of healthcare’,
British Medical Journal, Vol. 325, No. 7375, pp.1263–1265.
Crombie, I.K. (1996) The Pocket Guide to Critical Appraisal, BMJ Publishing Group, London.
DeBono, E. (2006) Expert on Creative Thinking [online] (accessed 5 July 2015).
Dhurandhar, N.V., Schoeller, D., Brown, A.W., Heymsfield, S.B., Thomas, D., Sørensen, T.I.,
Speakman, J.R., Jeansonne, M., Allison, D.B. and Energy Balance Measurement Working
Group (2015) ‘Energy balance measurement: when something is not better than nothing’,
International Journal of Obesity, Vol. 39, No. 7, pp.1109–1113.
Ding, D. and Gebel, K. (2012) ‘Built environment, physical activity, and obesity: what have we
learned from reviewing the literature?’, Health and Place, Vol. 18, No. 1, pp.100–105.
Dixon, T. (2013) Doubting Thomas: A Patron Saint for Scientists? [online] (accessed 5 July 2015).
Every-Palmer, S. and Howick, J. (2014) ‘How evidence-based medicine is failing due to biased
trials and selective publication’, Journal of Evaluation in Clinical Practice, Vol. 20, No. 6,
Gosall, N.K. and Gosall, G.S. (2012) The Doctor’s Guide to Critical Appraisal, 3rd ed., Pastest
Ltd., Cheshire.
Gratton, C. and Jones, I. (2010) Research Methods for Sport Studies, 2nd ed., Routledge, Oxford.
Greenhalgh, T. (1997a) ‘How to read a paper: getting your bearings (deciding what the paper is
about)’, British Medical Journal, Vol. 315, No. 7102, pp.243–246.
Greenhalgh, T. (1997b) ‘How to read a paper: assessing the methodological quality of published
papers’, British Medical Journal, Vol. 315, No. 7103, pp.305–308.
Greenhalgh, T. (2014) How to Read a Paper: The Basics of Evidence-based Medicine, 5th ed.,
Wiley Blackwell, Chichester.
Greenhalgh, T. and Taylor, R. (1997) ‘How to read a paper: papers that go beyond numbers
(qualitative research)’, British Medical Journal, Vol. 315, No. 7110, pp.740–743.
Institute for Writing and Rhetoric (2014) Revision: Cultivating a Critical Eye [online]
(accessed 5 July 2015).
McMaster University (2008) Critical Appraisal [online] (accessed 5 July 2015).
Merriam, S.B. (2009) Qualitative Research: A Guide to Design and Implementation, Jossey-Bass,
San Francisco, CA.
Micallef, C. (2014a) ‘The effectiveness of an eight-week Zumba programme for weight reduction
in a group of Maltese overweight and obese women’, Sport Sciences for Health, Vol. 10, No.
3, pp.211–217.
Micallef, C. (2014b) ‘Community development as a possible approach for the management of
diabetes mellitus focusing on physical activity lifestyle changes: a model proposed for Maltese
people with diabetes’, International Journal of Community Development, Vol. 2, No. 2.
Polit, D.F. and Hungler, B.P. (1998) Nursing Research: Principles and Methods, 6th ed.,
Lippincott, Philadelphia, PA.
Royal Australasian College of Surgeons (n.d.) Jadad Score [online] (accessed 5 July 2015).
Smith, R. (2005) Medical Journals are an Extension of the Marketing Arm of Pharmaceutical
Companies [online] (accessed 5 July 2015).
Staffordshire University (1998) Risk Assessments (General) Policy and Guidance [online]
(accessed 5 July 2015).
Stewart, A. (2010) Basic Statistics and Epidemiology: A Practical Guide, 3rd ed., Radcliffe
Publishing, Oxford.
Stewart, A. and Sampson, J. (2012) Dissertation Handbook, Staffordshire University.
Straus, S.E., Richardson, W.S., Glasziou, P. and Haynes, R.B. (2011) Evidence-based Medicine:
How to Practice and Teach It, 4th ed., Churchill Livingstone, London.
Sturmberg, J. and Topolski, S. (2014) ‘For every complex problem, there is an answer that is clear,
simple and wrong’, Journal of Evaluation in Clinical Practice, Vol. 20, No. 6, pp.1017–1025.
Trochim, W.M.K. and Donnelly, J.P. (2008) The Research Methods Knowledge Base, 3rd ed.,
Atomic Dog, Mason, OH.
University of South Alabama (n.d.) How do Epidemiologists Determine Causality? [online]
(accessed 5 July 2015).
University of South Australia (2014) Critical Appraisal Tools [online] (accessed 5 July 2015).
Willig, C. (2013) Introducing Qualitative Research in Psychology, 3rd ed., Open University Press.
World Health Organization (2007) The Challenge of Obesity in the WHO European Region and the
Strategies for Response, WHO Regional Office for Europe, Copenhagen.
... Intervention: [20], [21], [23], [24], [25], [28] The researchers use a wide variety of robust techniques to make methodology more effective and reliable like blinding, randomization, restriction matching etc. Any shortcoming in intervention in methodology most likely leads to collection data or results which that do not reflect the truth. ...
... The extraneous variation can influence research findings, therefore methods to control relevant confounding variables should be applied [33]. During critical evaluation one should look for information's regarding [21], [28]. ...
... 9. DISCUSSION AND CONCLUSIONS: [10], [12], [14], [21], [25], [28], [40] Discussion should be given in a scientific and rational way. All variables or parameters of the study should be discussed separately in separate paragraphs. ...
Full-text available
Background: Critical appraisal of research paper is a fundamental skill in modern medical practice, which is skills-set and developed throughout the professional career. The professional experience facilitates this and through integration with clinical experience and patient preference, permits high quality evidence-based medicine practice in patient care. These skills to be mastered not only by academic medical professionals but also by the clinicians involved in clinical practice. Objective: To provide a simple and robust method for assessing the trustworthiness of a research paper and its value in clinical practice. Methodology: Through detailed literature search, All essential sections and subsection mandatory for a research paper were identified followed by the necessary steps or information required in each section or questions which may arise or needs to addressed were identified. The important questions or steps which are integral in assessing the reliability and validity of a research are gathered during critical review of a research paper. Results: Out of 128 full text articles, 49 full-text articles containing robust and pertinent information as per objective were short listed for review. Conclusion: Critical appraisal of a research paper or project is a fundamental skill in modern medical practice for assessing the worth of clinical research and in providing a guideline of its relevance to the profession.
... For example, a national study on sexual health aspects encountered by adults could be incomplete if it only randomly selected clear-cut genders without employing stratified sampling for gender-variant individuals. 36 It is also not correct to label all gender-dysphoric and transgender people as delusional. Many, especially as they grow older, realise that sex cannot change, but they may still keep believing that their emotional unhappiness can be resolved if they can physically and socially impersonate the opposite sex. ...
Full-text available
Introduction: The paper deals with gender dysphoria (gender identity) and helps the reader understand that people are not born in wrong bodies thus linking the understanding with unnatural behaviour. Methods: The second part compares transgenderism with a psychiatric condition: clinical lycanthropy. We see a case of someone believing he was a bird and how he was cured. Results: The author highlights similarities between the two conditions. In both scenarios there could be delusions and the individuals are unhappy with their bodies. The unshakeable belief in drastically changing one’s body is not normal and should receive psychological or psychiatric treatment. Conclusion: A number of bioethical statements are presented. The author reminds healthcare workers to adhere to the medical principle of ‘first do no harm’ when considering gender affirmative treatment and advises that political decisions should not be based on just palliative approaches. It is concluded that gender remains binary. The transgender or third gender is a socio-political construct.
... In addition, the absence of a control group, the lack of randomization and the lack of medication and dietary monitoring hampered the possibility of controlling for possible bias and confounders. Nevertheless, it should be taken into account that in studies including individuals under free-living conditions, it might be unfeasible to isolate participants from a control group or to monitor subject's behavior out of the intervention [48]. It is also important to consider the pragmatic design of this community-based program, as it was intended to reach as many individuals as possible in a real-life setting and thus the ethical reservations of leaving subjects out of the intervention by requiring a control group. ...
Full-text available
The purpose of this study was to analyze the changes which occurred after a supervised aerobic exercise program in the blood pressure (BP), cardiorespiratory fitness and body mass in overweight individuals. Sixty-one individuals (65.6 ± 6.5 years, 31.16 ± 4.76 kg/m²) performed an exercise program consisting of 1 h sessions of aerobic exercises, three times/week for 6 months. Resting systolic and diastolic BP, cardiorespiratory fitness [6-min walk test (6MWT)] and body mass were measured three times; at baseline (T0), after 3 months (T1) and after 6 months (T2). Results showed significant (p < 0.05) changes in systolic BP, diastolic BP and the 6MWT at T2. Small and statistically no significant changes were observed in body mass. Greater significant changes were observed in BP measures and the 6MWT at T1 compared to measurements at T2. A significant relationship between changes in resting systolic BP and diastolic BP (r = 0.47) was found but not between changes in other variables. It could be concluded that a 6-month exercise program based on aerobic exercise has beneficial effects on cardiovascular risk factors regardless of body mass loss. These findings highlight the importance of lifestyle interventions focusing on increasing physical activity rather than focusing on body mass loss alone.
Full-text available
Introduction: Research is an integral part of medical field including medical education. Medical report writing is a specialised technique in the field of clinical research. Narrative and reflective way of medical report writing is an individual perception. Narration helps in expressing the situation while reflection of situation helps in reciprocating it well. Aim: To assess perception regarding narrative and reflective writing in fellowship candidates. Materials and Methods: Forty candidates enrolled for fellowship course were assessed in a prospective study for perception regarding narrative and reflective way of medical report writing of their allotted projects. Assessment was done on the basis of questionnaire based score (Likerts scale). Software used in the analysis was SPSS 22.0 version to assess the percentage of various score against a specific question. Results: Maximum participants were in the age group of 36- 40 years (45%). A total of 80% candidates were of the opinion that narrative and reflective way of research writing is applicable in the current research writing; 50% percent candidates strongly agreed that reflective writing is better in analysing the situation; 80% candidates experienced that it is useful in gaining selfknowledge and understanding the topic better. Conclusion: Narrative and reflective writing is an effective way to describe the research methodology and its results. Also, reflective writing helps in framing future applications of the research.
Full-text available
Treating diabetes mellitus is very expensive and with 10% prevalence, the Maltese healthcare can face serious problems. Despite the evidence that regular exercise lowers blood glucose, few persons with diabetes participated in physical activity due to fear of hypoglycaemia and other barriers. Conventional management of diabetes imposes lifestyle changes and favours pharmaceutical administration. Implementation of grassroots initiatives through health needs assessment leading to community development is an alternative strategy. The main purposes of this article are, to present clear information on community development as an alternative to conventional diabetes interventions, and to serve as a model to stimulate interest amongst the authorities to use this approach for diabetes management. In a fictitious diabetes community scenario, based on available, limited literature, lack of physical activity was identified as the main need to improve health. Through a ‘bottom-up’ approach based on empowerment and participation, the sedentary community gradually progressed to active subgroups that eventually became less dependent on anti-diabetic medications and car-use. There was the possibility of a sports and recreational strategy. Health promoters were leading players, followed by local councils and other stakeholders. After publishing physical activity guidelines, holding regular recreational activities and celebrations, the community development was sustainable, cost-effective and environmental friendly. Project evaluation was crucial. Funding was governmental and partly sponsored by health-compatible enterprises. Albeit time-consuming, community development can be the most ethical and effective form of health promotion for diabetes healthcare. This approach offers a challenge to the traditional medical model.
Full-text available
Energy intake (EI) and physical activity energy expenditure (PAEE) are key modifiable determinants of energy balance, traditionally assessed by self-report despite its repeated demonstration of considerable inaccuracies. We argue here that it is time to move from the common view that self-reports of EI and PAEE are imperfect, but nevertheless deserving of use, to a view commensurate with the evidence that self-reports of EI and PAEE are so poor that they are wholly unacceptable for scientific research on EI and PAEE. While new strategies for objectively determining energy balance are in their infancy, it is unacceptable to use decidedly inaccurate instruments, which may misguide health care policies, future research, and clinical judgment. The scientific and medical communities should discontinue reliance on self-reported EI and PAEE. Researchers and sponsors should develop objective measures of energy balance.International Journal of Obesity accepted article preview online, 13 November 2014. doi:10.1038/ijo.2014.199.
Full-text available
Containment level 4 (CL4) laboratories studying biosafety level 4 viruses are under strict regulations to conduct nonhuman primate (NHP) studies in compliance of both animal welfare and biosafety requirements. NHPs housed in open-barred cages raise concerns about cross-contamination between animals, and accidental exposure of personnel to infectious materials. To address these concerns, two NHP experiments were performed. One examined the simultaneous infection of 6 groups of NHPs with 6 different viruses (Machupo, Junin, Rift Valley Fever, Crimean-Congo Hemorrhagic Fever, Nipah and Hendra viruses). Washing personnel between handling each NHP group, floor to ceiling biobubble with HEPA filter, and plexiglass between cages were employed for partial primary containment. The second experiment employed no primary containment around open barred cages with Ebola virus infected NHPs 0.3 meters from naïve NHPs. Viral antigen-specific ELISAs, qRT-PCR and TCID50 infectious assays were utilized to determine antibody levels and viral loads. No transmission of virus to neighbouring NHPs was observed suggesting limited containment protocols are sufficient for multi-viral CL4 experiments within one room. The results support the concept that Ebola virus infection is self-contained in NHPs infected intramuscularly, at least in the present experimental conditions, and is not transmitted to naïve NHPs via an airborne route.
Full-text available
Purpose: Zumba dance exercises are promoted for body weight reduction. However, scientific research on its potential as a weight loss tool is scant. Only a few energy expenditure studies on small samples of relatively young and apparently healthy volunteers were performed, and the energy cost of Zumba has not been translated into actual weight reduction. The study investigated the before–after effects of a Zumba programme on the weight and body mass index (BMI) of 36 females, mean age 34.25 ± 8.50 years and mean BMI 32.98 ± 5.32 kg/m2. Methods: The intervention involved 16 hourly Zumba sessions held twice weekly over 8 weeks. The exercises comprised a mixture of merengue, salsa, reggaeton and bachata with warm-up and cool-down activities. They were of low-impact style, but were maintained at vigorous intensity that was still bearable for the obese subjects. An important requirement was that the programme had to be taken as an additional part of their lives and not as a means of altering their nutrition and physical activity habits. Results: The subjects had statistically significant decreases and large effects for weight and BMI: 2.13 kg, t(35) = 13.77, P\0.0005, d = 2.30, and 0.83 kg/m2, t(35) = 13.02, P\0.0005, d = 2.17, respectively. Conclusions: Good programme adherence and other strengths were attributed to this study. However, there could have been factors like history threats that affected the changes. Further studies are therefore required to establish the effectiveness of Zumba as an exercise modality for weight loss. Keywords: Body mass index � Body weight � Obesity � Overweight � Weight loss � Zumba
Full-text available
Evidence-based medicine (EBM) was announced in the early 1990s as a ‘new paradigm’ for improving patient care. Yet there is currently little evidence that EBM has achieved its aim. Since its introduction, health care costs have increased while there remains a lack of high-quality evidence suggesting EBM has resulted in substantial population-level health gains. In this paper we suggest that EBM's potential for improving patients' health care has been thwarted by bias in the choice of hypotheses tested, manipulation of study design and selective publication. Evidence for these flaws is clearest in industry-funded studies. We argue EBM's indiscriminate acceptance of industry-generated ‘evidence’ is akin to letting politicians count their own votes. Given that most intervention studies are industry funded, this is a serious problem for the overall evidence base. Clinical decisions based on such evidence are likely to be misinformed, with patients given less effective, harmful or more expensive treatments. More investment in independent research is urgently required. Independent bodies, informed democratically, need to set research priorities. We also propose that evidence rating schemes are formally modified so research with conflict of interest bias is explicitly downgraded in value.
Full-text available
This essay examines the notions of knowledge, truth and certainty as they apply to medical research and patient care. The human body does not behave in mechanistic but rather complex adaptive ways; thus, its behaviour to challenges is non-deterministic. This insight has important ramifications for experimental studies in health care and their statistical interrogation that are described in detail. Four implications are highlighted: one, there is an urgent need to develop a greater awareness of uncertainties and how to respond to them in clinical practice, namely, what is important and what is not in the context of this patient; two, there is an equally urgent need for health professionals to understand some basic statistical terms and their meanings, specifically absolute risk, its reciprocal, numbers needed to treat and its inverse, index of therapeutic impotence, as well as seeking out the effect size of an intervention rather than blindly accepting P-values; three, there is an urgent need to accurately present the known in comprehensible ways through the use of visual tools; and four, there is a need to overcome the perception, that errors of commission are less troublesome than errors of omission as neither's consequences are predictable.
Full-text available
Introduction Before changing your practice in the light of a published research paper, you should decide whether the methods used were valid. This article considers five essential questions that should form the basis of your decision. Question 1: Was the study original? Only a tiny proportion of medical research breaks entirely new ground, and an equally tiny proportion repeats exactly the steps of previous workers. The vast majority of research studies will tell us, at best, that a particular hypothesis is slightly more or less likely to be correct than it was before we added our piece to the wider jigsaw. Hence, it may be perfectly valid to do a study which is, on the face of it, “unoriginal.” Indeed, the whole science of meta-analysis depends on the literature containing more than one study that has addressed a question in much the same way. The practical question to ask, then, about a new piece of research is not “Has anyone ever done a similar study?” but “Does this new research add to the literature in any way?” For example: Is this study bigger, continued for longer, or otherwise more substantial than the previous one(s)?Is the methodology of this study any more rigorous (in particular, does it address any specific methodological criticisms of previous studies)?Will the numerical results of this study add significantly to a meta-analysis of previous studies?Is the population that was studied different in any way (has the study looked at different ages, sex, or ethnic groups than previous studies)?Is the clinical issue addressed of sufficient importance, and is there sufficient doubt in the minds of the public or key decision makers, to make new evidence “politically” desirable even when it is not strictly scientifically necessary? Question 2: Whom is the study about? Before assuming that the results of a paper are applicable to your own practice, ask yourself the following questions: How were the subjects recruited? 
If you wanted to do a questionnaire survey of the views of users of the hospital casualty department, you could recruit respondents by advertising in the local newspaper. However, this method would be a good example of recruitment bias since the sample you obtain would be skewed in favour of users who were highly motivated and liked to read newspapers. You would, of course, be better to issue a questionnaire to every user (or to a 1 in 10 sample of users) who turned up on a particular day.Who was included in the study? Many trials in Britain and North America routinely exclude patients with coexisting illness, those who do not speak English, those taking certain other medication, and those who are illiterate. This approach may be scientifically “clean,” but since clinical trial results will be used to guide practice in relation to wider patient groups it is not necessarily logical.1 The results of pharmacokinetic studies of new drugs in 23 year old healthy male volunteers will clearly not be applicable to the average elderly woman.Who was excluded from the study? For example, a randomised controlled trial may be restricted to patients with moderate or severe forms of a disease such as heart failure—a policy which could lead to false conclusions about the treatment of mild heart failure. This has important practical implications when clinical trials performed on hospital outpatients are used to dictate “best practice” in primary care, where the spectrum of disease is generally milder.Were the subjects studied in “real life” circumstances? For example, were they admitted to hospital purely for observation? Did they receive lengthy and detailed explanations of the potential benefits of the intervention? Were they given the telephone number of a key research worker? Did the company that funded the research provide new equipment which would not be available to the ordinary clinician? 
These factors would not necessarily invalidate the study itself, but they may cast doubt on the applicability of its findings to your own practice. Question 3: Was the design of the study sensible? Although the terminology of research trial design can be forbidding, much of what is grandly termed “critical appraisal” is plain common sense. I usually start with two fundamental questions: What specific intervention or other manoeuvre was being considered, and what was it being compared with? It is tempting to take published statements at face value, but remember that authors frequently misrepresent (usually subconsciously rather than deliberately) what they actually did, and they overestimate its originality and potential importance. The examples in the box use hypothetical statements, but they are all based on similar mistakes seen in print.What outcome was measured, and how? If you had an incurable disease for which a pharmaceutical company claimed to have produced a new wonder drug, you would measure the efficacy of the drug in terms of whether it made you live longer (and, perhaps, whether life was worth living given your condition and any side effects of the medication). You would not be too interested in the levels of some obscure enzyme in your blood which the manufacturer assured you were a reliable indicator of your chances of survival. The use of such surrogate endpoints is discussed in a later article in this series.2 View this table:View PopupView InlineExamples of problematic descriptions in the methods section of a paper RETURN TO TEXT View larger version:In a new windowDownload as PowerPoint SlidePETER BROWN The measurement of symptomatic effects (such as pain), functional effects (mobility), psychological effects (anxiety), or social effects (inconvenience) of an intervention is fraught with even more problems. 
You should always look for evidence in the paper that the outcome measure has been objectively validated—that is, that someone has confirmed that the scale of anxiety, pain, and so on used in this study measures what it purports to measure, and that changes in this outcome measure adequately reflect changes in the status of the patient. Remember that what is important in the eyes of the doctor may not be valued so highly by the patient, and vice versa.3 Question 4: Was systematic bias avoided or minimised? Systematic bias is defined as anything that erroneously influences the conclusions about groups and distorts comparisons.4 Whether the design of a study is a randomised controlled trial, a non-randomised comparative trial, a cohort study, or a case-control study, the aim should be for the groups being compared to be as similar as possible except for the particular difference being examined. They should, as far as possible, receive the same explanations, have the same contacts with health professionals, and be assessed the same number of times by using the same outcome measures. Different study designs call for different steps to reduce systematic bias: Randomised controlled trials In a randomised controlled trial, systematic bias is (in theory) avoided by selecting a sample of participants from a particular population and allocating them randomly to the different groups. Figure 2 summarises sources of bias to check for. View larger version:In a new windowDownload as PowerPoint SlideFig 1 Sources of bias to check for in a randomised controlled trial Non-randomised controlled clinical trials I recently chaired a seminar in which a multidisciplinary group of students from the medical, nursing, pharmacy, and allied professions were presenting the results of several in house research studies. 
All but one of the studies presented were of comparative, but non-randomised, design—that is, one group of patients (say, hospital outpatients with asthma) had received one intervention (say, an educational leaflet) while another group (say, patients attending GP surgeries with asthma) had received another intervention (say, group educational sessions). I was surprised how many of the presenters believed that their study was, or was equivalent to, a randomised controlled trial. In other words, these commendably enthusiastic and committed young researchers were blind to the most obvious bias of all: they were comparing two groups which had inherent, self selected differences even before the intervention was applied (as well as having all the additional potential sources of bias of randomised controlled trials). As a general rule, if the paper you are looking at is a non-randomised controlled clinical trial, you must use your common sense to decide if the baseline differences between the intervention and control groups are likely to have been so great as to invalidate any differences ascribed to the effects of the intervention. This is, in fact, almost always the case.5 6 Cohort studies The selection of a comparable control group is one of the most difficult decisions facing the authors of an observational (cohort or case-control) study. Few, if any, cohort studies, for example, succeed in identifying two groups of subjects who are equal in age, sex mix, socioeconomic status, presence of coexisting illness, and so on, with the single difference being their exposure to the agent being studied. In practice, much of the “controlling” in cohort studies occurs at the analysis stage, where complex statistical adjustment is made for baseline differences in key variables. 
Unless this is done adequately, statistical tests of probability and confidence intervals will be dangerously misleading.7 This problem is illustrated by the various cohort studies on the risks and benefits of alcohol, which have consistently found a “J shaped” relation between alcohol intake and mortality. The best outcome (in terms of premature death) lies with the cohort who are moderate drinkers.8 The question of whether “teetotallers” (a group that includes people who have been ordered to give up alcohol on health grounds, health faddists, religious fundamentalists, and liars, as well as those who are in all other respects comparable with the group of moderate drinkers) have a genuinely increased risk of heart disease, or whether the J shape can be explained by confounding factors, has occupied epidemiologists for years.8 Case-control studies In case-control studies (in which the experiences of individuals with and without a particular disease are analysed retrospectively to identify putative causative events), the process that is most open to bias is not the assessment of outcome, but the diagnosis of “caseness” and the decision as to when the individual became a case. A good example of this occurred a few years ago when a legal action was brought against the manufacturers of the whooping cough (pertussis) vaccine, which was alleged to have caused neurological damage in a number of infants.9 In the court hearing, the judge ruled that misclassification of three brain damaged infants as “cases” rather than controls led to the overestimation of the harm attributable to whooping cough vaccine by a factor of three.9 Question 5: Was assessment “blind”? 
Even the most rigorous attempt to achieve a comparable control group will be wasted effort if the people who assess outcome (for example, those who judge whether someone is still clinically in heart failure, or who say whether an x ray is “improved” from last time) know which group the patient they are assessing was allocated to. If, for example, I knew that a patient had been randomised to an active drug to lower blood pressure rather than to a placebo, I might be more likely to recheck a reading which was surprisingly high. This is an example of performance bias, which, along with other pitfalls for the unblinded assessor, is listed in figure 2. Question 6: Were preliminary statistical questions dealt with? Three important numbers can often be found in the methods section of a paper: the size of the sample; the duration of follow up; and the completeness of follow up. Sample size In the words of statistician Douglas Altman, a trial should be big enough to have a high chance of detecting, as statistically significant, a worthwhile effect if it exists, and thus to be reasonably sure that no benefit exists if it is not found in the trial.10 To calculate sample size, the clinician must decide two things. The first is what level of difference between the two groups would constitute a clinically significant effect. Note that this may not be the same as a statistically significant effect. You could administer a new drug which lowered blood pressure by around 10 mm Hg, and the effect would be a significant lowering of the chances of developing stroke (odds of less than 1 in 20 that the reduced incidence occurred by chance).11 However, in some patients, this may correspond to a clinical reduction in risk of only 1 in 850 patient years12—a difference which many patients would classify as not worth the effort of taking the tablets. Secondly, the clinician must decide the mean and the standard deviation of the principal outcome variable. 
Using a statistical nomogram,10 the authors can then, before the trial begins, work out how large a sample they will need in order to have a moderate, high, or very high chance of detecting a true difference between the groups (the power of the study). It is common for studies to stipulate a power of between 80% and 90%. Underpowered studies are ubiquitous, usually because the authors found it harder than they anticipated to recruit their subjects. Such studies typically lead to a type II or β error: the erroneous conclusion that an intervention has no effect. (In contrast, the rarer type I or α error is the conclusion that a difference is significant when in fact it is due to sampling error.)

Duration of follow up

Even if the sample size was adequate, a study must continue long enough for the effect of the intervention to be reflected in the outcome variable. A study looking at the effect of a new painkiller on the degree of postoperative pain may only need a follow up period of 48 hours. On the other hand, in a study of the effect of nutritional supplementation in the preschool years on final adult height, follow up should be measured in decades.

Completeness of follow up

Subjects who withdraw from ("drop out of") research studies are less likely to have taken their tablets as directed, more likely to have missed their interim checkups, and more likely to have experienced side effects when taking medication, than those who do not withdraw.13 The reasons why patients withdraw from clinical trials include the following:

- Incorrect entry of patient into trial (that is, researcher discovers during the trial that the patient should not have been randomised in the first place because he or she did not fulfil the entry criteria);
- Suspected adverse reaction to the trial drug. Note that the "adverse reaction" rate in the intervention group should always be compared with that in patients given placebo.
  Inert tablets bring people out in a rash surprisingly frequently;
- Loss of patient motivation;
- Withdrawal by clinician for clinical reasons (such as concurrent illness or pregnancy);
- Loss to follow up (patient moves away, etc);
- Death.

[Photograph: Are these results credible? BMJ/PREUSS/SOUTHAMPTON UNIVERSITY TRUST]

Simply ignoring everyone who has withdrawn from a clinical trial will bias the results, usually in favour of the intervention. It is, therefore, standard practice to analyse the results of comparative studies on an intention to treat basis.14 This means that all data on patients originally allocated to the intervention arm of the study, including those who withdrew before the trial finished, those who did not take their tablets, and even those who subsequently received the control intervention for whatever reason, should be analysed along with data on the patients who followed the protocol throughout. Conversely, withdrawals from the placebo arm of the study should be analysed with those who faithfully took their placebo.

In a few situations, intention to treat analysis is not used. The most common is the efficacy analysis, which seeks to explain the effects of the intervention itself and is therefore based on the treatment actually received. But even if the subjects in an efficacy analysis are part of a randomised controlled trial, for the purposes of the analysis they effectively constitute a cohort study.

Summary points

The first essential question to ask about the methods section of a published paper is: was the study original? The second is: whom is the study about? Thirdly, was the design of the study sensible? Fourthly, was systematic bias avoided or minimised? Finally, was the study large enough, and continued for long enough, to make the results credible?

The articles in this series are excerpts from How to read a paper: the basics of evidence based medicine.
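The contrast between intention to treat and efficacy (per-protocol) analysis can be sketched in code. This is a toy illustration with entirely hypothetical data, assuming a binary outcome; it is not from the article:

```python
# Intention to treat vs efficacy analysis: a toy sketch (data hypothetical).
# Each record: arm the patient was randomised to, treatment actually
# received, and whether the outcome event occurred.
patients = [
    {"randomised": "drug", "received": "drug", "event": False},
    {"randomised": "drug", "received": "placebo", "event": True},   # crossed over
    {"randomised": "drug", "received": "drug", "event": False},
    {"randomised": "placebo", "received": "placebo", "event": True},
    {"randomised": "placebo", "received": "placebo", "event": False},
    {"randomised": "placebo", "received": "drug", "event": False},  # crossed over
]

def event_rate(group):
    """Proportion of patients in the group who had the outcome event."""
    return sum(p["event"] for p in group) / len(group)

# Intention to treat: analyse by the arm originally allocated,
# regardless of the treatment actually received.
itt_drug = [p for p in patients if p["randomised"] == "drug"]

# Efficacy analysis: analyse by treatment actually received; for the
# purposes of the analysis this effectively becomes a cohort study.
eff_drug = [p for p in patients if p["received"] == "drug"]

print(event_rate(itt_drug))  # one event among the three allocated to drug
print(event_rate(eff_drug))  # excludes the crossover who had the event
```

Note how the two groupings differ only in how the crossover patients are counted, yet they can give quite different event rates, which is exactly why ignoring withdrawals and crossovers tends to bias results in favour of the intervention.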
The book includes chapters on searching the literature and implementing evidence based findings. It can be ordered from the BMJ Bookshop: tel 0171 383 6185/6245; fax 0171 383 6662. Price £13.95 UK members, £14.95 non-members.

References

1. Bero LA, Rennie D. Influences on the quality of published drug studies. Int J Health Technology Assessment 1996;12:209-37.
2. Greenhalgh T. Papers that report drug trials. In: How to read a paper: the basics of evidence based medicine. London: BMJ Publishing Group, 1997:87-96.
3. Dunning M, Needham G. But will it work, doctor? Report of conference held in Northampton, 22-23 May 1996. London: King's Fund, 1997.
4. Rose G, Barker DJP. Epidemiology for the uninitiated. 3rd ed. London: BMJ Publishing Group, 1994.
5. Chalmers TC, Celano P, Sacks HS, Smith H. Bias in treatment assignment in controlled clinical trials. N Engl J Med 1983;309:1358-61.
6. Colditz GA, Miller JA, Mosteller JF. How study design affects outcome in comparisons of therapy. I. Medical. Statistics in Medicine 1989;8:441-54.
7. Brennan P, Croft P. Interpreting the results of observational research: chance is not such a fine thing. BMJ 1994;309:727-30.
8. Maclure M. Demonstration of deductive meta-analysis: alcohol intake and risk of myocardial infarction. Epidemiol Rev 1993;15:328-51.
9. Bowie C. Lessons from the pertussis vaccine trial. Lancet 1990;335:397-9.
10. Altman D. Practical statistics for medical research. London: Chapman and Hall, 1991:456.
11. Medical Research Council Working Party. MRC trial of mild hypertension: principal results. BMJ 1985;291:97-104.
12. MacMahon S, Rogers A. The effects of antihypertensive treatment on vascular disease: re-appraisal of the evidence in 1993. J Vascular Med Biol 1993;4:265-71.
13. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for clinical medicine. London: Little, Brown, 1991:19-49.
14. Stewart LA, Parmar MKB. Bias in the analysis and reporting of randomized controlled trials. Int J Health Technology Assessment 1996;12:264-75.
15. Knipschild P. Some examples of systematic reviews. In: Chalmers I, Altman DG, eds. Systematic reviews. London: BMJ Publishing Group, 1995:9-16.