

Intervention research: appraising study designs, interpreting findings
and creating research in clinical practice
Susan H. Ebbels1,2
1Moor House School & College, Surrey, UK
2Division of Psychology and Language Sciences, University College London, UK
Correspondence concerning this article should be addressed to Susan Ebbels, Moor
House School & College, Mill Lane, Hurst Green, Oxted, Surrey, RH8 9AQ, UK.
Keywords: evidence based practice, intervention research, study design
Speech-language pathologists (SLPs) are increasingly required to read, interpret and create evidence
regarding the effectiveness of interventions. This requires a good understanding of the strengths and
weaknesses of different intervention study designs. This paper aims to take readers through a range
of designs commonly used in speech-language pathology, working from those with the least to most
experimental control, with a particular focus on how the more robust designs avoid some of the
limitations of weaker designs. It then discusses the factors other than research design which need to
be considered when deciding whether or not to implement an intervention in clinical practice. The
final section offers some tips and advice on carrying out research in clinical practice, with the hope
that more SLPs will become actively involved in creating intervention research.
Evidence-based practice is key to providing the best possible service for our clients. In order to
deliver evidence-based practice, clinicians need to integrate their individual clinical expertise
with the best available external clinical evidence (Sackett, Rosenberg, Gray, Haynes, &
Richardson, 1996). Therefore, it is crucial that clinicians are able to identify the best available
research evidence by reading the literature and applying a sound knowledge of the strengths and
limitations of different intervention study designs.
In some areas of speech-language pathology practice, however, the intervention evidence is very
limited. Thus, speech-language pathologists (SLPs) may need to use evidence that is only partially
related to their clinical situation and to place more reliance on their clinical expertise while waiting
for more relevant evidence to emerge. An alternative solution is for SLPs to create their own
evidence. SLPs who investigate the effectiveness of interventions delivered in their particular setting
and with their particular client group create evidence which is highly relevant for that situation and
client group, while also increasing their own ability and confidence in making evidence-based
decisions. This can lead to more effective intervention and hence improved outcomes for their clients.
Practising SLPs may be anxious about carrying out research and feel this is best left to those working
in universities who have more research skills and time to devote to research. While this may be the
case, intervention studies can be very time-consuming and costly due to the labour-intensive
process of administering repeated assessments and providing intervention. Thus, limited numbers of
intervention studies are likely to be funded. However, practising SLPs are already carrying out
assessments and intervention, so collaborations between practising SLPs and universities could
significantly reduce the costs of intervention studies, as the intervention is already being provided and
funded from elsewhere. Such collaborations therefore have the advantage of creating intervention
research which is highly clinically relevant, in a cost-effective manner, while drawing on the
research expertise of university-employed staff.
Combining theoretical and research experience with clinical experience can benefit intervention
studies as well as increasing the skills and knowledge of those involved. Snowling and Hulme (2011)
argue for a 'virtuous circle' linking theory with practice, whereby theory leads to the formulation of
possible interventions, which are then evaluated in intervention studies with strong designs, the
results of which are used to inform and refine theory. I would add that clinical experience also has a
role to play and can contribute to the formulation of theoretically well-founded interventions.
Clinicians will often have insights into the practicalities of delivering interventions that could help
improve the effectiveness of those interventions, for example, how long and frequent sessions
should be, how often the focus of activities needs to change and other tips
for motivating clients and potentially boosting learning. When the intervention has been evaluated,
the results can extend the theory. In this way, a virtuous circle could be created
where both theory and clinical experience help to formulate interventions and the results of those
interventions inform and improve both theory and the clinical experience of those involved.
Given the value of SLPs being involved in intervention research, both as consumers (reading and
understanding the literature and applying relevant findings to their clinical work) and increasingly as
(co-)creators of intervention research, it is important they have sufficient knowledge of intervention
research design. This paper aims to provide SLPs with some of that knowledge by discussing the
strengths and limitations of intervention study designs commonly used in speech-language
pathology with the aim that SLPs will be better able to critically appraise studies they read and also
that some will use this information to help them design and carry out research studies within their
clinical practice which are as robust as possible.
My intervention research experience and knowledge is primarily with children with Developmental
Language Disorder (DLD) and therefore, many of the examples of studies I provide will relate to this
client group. However, this paper aims to be relevant to those working with a range of client groups
and settings.
Intervention study design
The design of an intervention study is fundamental to its robustness and reliability and needs to be
planned carefully in advance. When carrying out intervention studies in clinical settings, many
factors are at play, only some of which relate directly to the intervention itself. Thus, in order to
separate the effects of the intervention from the effects of other non-specific factors, we need
studies which control for as many of these as possible. Some designs are much more robust than
others as they control for more of the spurious factors which could influence outcomes. Involving
larger numbers of participants also increases reliability and the ability to generalize the findings to
other people, but the size and degree of experimental control of a study interact to improve
reliability, with experimental controls being the more crucial element.
Figure 1 shows a schematic view of this: increasing numbers of participants are shown on the x axis
and designs with increasing levels of experimental control on the y axis. Also marked on Figure 1 are
four hypothetical studies: Studies A and B have good experimental control, but Study B has many
more participants than Study A; Studies C and D on the other hand have poor experimental control
but D has more participants than C. The most reliable of these four studies is Study B, with good
experimental control and large numbers of participants, and the least reliable is Study C, with weak
experimental control and few participants. Choosing between Studies A and D, however, is more
difficult and may depend on the question being asked. In both cases, a positive finding needs to be
replicated in another study with greater experimental control (for Study D) or more participants (for
Study A). However, a clinician may need to make clinical decisions based on evidence from studies
such as A and D before more reliable studies have been carried out. In this case, a small study with
good experimental control is likely to be more reliable than a large study with weak control, but
both need to be treated with caution. In terms of carrying out studies, it could be argued that Study
D would waste resources (by involving a large number of participants, but in an experimental design
likely to produce unreliable results) and that Study A, which would be cheaper, may therefore be the
better option.
Figure 1. Contributions of experimental control and numbers of participants to study robustness
For SLPs who are designing intervention studies, it is important to try to maximise both the number
of participants and the experimental control. If only a fixed number of participants are available, it is
particularly important to try to maximise the degree of experimental control. Conversely, if a
particular design has to be used (maybe due to practical restrictions), maximising the number of
participants is important. Later, I discuss different experimental designs and the level of control they
provide, starting with the least robust. For each, I first discuss the design in terms of timings and
types of assessments relative to the intervention and then what factors each design can and cannot
control for.
In addition to the overall design of an intervention study in terms of timings of assessments and
interventions, other features are also important and should be considered by SLPs who are
appraising a study carried out by others or planning to carry out a study themselves. These features
include: how representative the participants are and how outcomes are assessed. In general,
findings can only be generalised to participants who are similar to those in the original study. In
order to investigate the effectiveness of the intervention in other groups, further studies will need to
be carried out. Assessment of outcomes is complex. The tests need to be appropriate to the
research question and the participants and sensitive to the intervention. For example, if an
intervention is hypothesised to cause a change in a very specific area of language, but the outcome
measure is a standardised test which only includes one question relating to the specific area, change
on that measure is unlikely, even if the intervention has caused large changes in the specific area of
language targeted. Thus, it is often necessary to create tests specifically for an intervention study.
Generalisation of new skills may also be important to assess. This may include generalisation to
standardised tests, but may also be to other areas of language and/or educational performance, or
to other situations such as general conversation or performance in the classroom. It is important to
consider in advance how much change you would expect or desire in these areas; again, this comes
back to the research question. If the main aim of an intervention is to improve performance in the
classroom, this would be the primary outcome and crucial to measure. However, if the aim is to
improve a small area of functioning with a very short intervention, a change in classroom
performance may not be expected as this may require many more hours of intervention, and thus
may not be relevant to measure.
Wherever possible, assessments should be carried out by assessors who are blind, that is, unaware of how
individual participants fit into the design of the study. Thus, they may not know which participants
have versus have not had intervention, or they may know the participant has had intervention, but
not which items in the test battery have been targeted versus not targeted. Having blind assessors
reduces the chance of bias, both during the assessment and scoring process. In our research, we
have sourced blind assessors from various places: student SLPs who are on placement, or who come
on a voluntary basis in order to gain experience of research (Ebbels, Nicoll, Clark, Eachus, Gallagher,
Horniman et al., 2012), SLP assistants within the team who have been kept blind to the content of the
intervention (Ebbels, Maric, Murphy, & Turner, 2014), or SLPs swapping with
other SLPs in the same team who again are unaware of the precise nature of the intervention each
participant has received (Ebbels, Wright, Brockbank, Godfrey, Harris, Leniston et al., submitted).
At a minimum, assessments should be carried out before and after intervention (methods of
increasing experimental control are discussed below). However, it might also be important to test
whether new skills are maintained after a period of time. Intervention studies often have a
hypothesis that intervention will improve skills, but what happens after intervention ceases is also of
interest; new skills may diminish (i.e., the intervention has only a short-term effect), or they may
remain stable (i.e., the gains from intervention are maintained), or they may even improve further
(i.e., the intervention has triggered a change which continues after the intervention has ceased).
Degree of experimental control
In sections 1-10 below, I discuss in turn each experimental design shown in Figure 1 and their
strengths and limitations, working from those with the least to most experimental control.
1. Anecdotes and clinical experience
SLPs' clinical experience together with information and anecdotes from colleagues are used more
frequently than other sources of information for guiding their intervention decisions (Nail-Chiwetalu
& Ratner, 2007; Zipoli & Kennedy, 2005). However, while such information may provide a useful
starting point in considering whether to use an intervention, anecdotes and clinical experience alone
are vulnerable to the 'therapeutic illusion' (Casarett, 2016;
Thomas, 1978), whereby everyone involved in an intervention (both clinicians and patients) believes
the intervention is more effective than it actually is. We may interpret a change on our measures as
an intervention effect, when it may actually be random variation, regression to the mean, or
other factors unrelated to the intervention. Regression to the mean is a
phenomenon in which extreme test scores tend to become less extreme (regress to the mean) when
the test is repeated. This is a problem when participants or targets have been chosen for
intervention due to low levels of performance on a measure which is subsequently used to evaluate
the effect of the intervention. For example, consider child A, whose true score on a test is 90,
but who happens to score 83 on a particular day. If intervention is provided for all children with scores
below a cut-off of 85, child A would be included; however, at the next test point their score is
more likely to be near their true score of 90. This would appear to be an improvement, when in
actual fact it is merely due to random variations in their scores. Conversely, consider child B, whose
true score is 80, but who scores 86 on the day of testing. Child B's subsequent score would be expected to be
more similar to their true score of 80 (i.e., decrease) at the next test point. When evaluating the
performance of a group, the average score would appear to increase if child A's score
increases, but not if child B has been excluded from intervention due to a pre-intervention score
above the cut-off. Now, imagine a study which includes several (or many!) children whose pre-
intervention scores fall on the opposite side of the cut-off to their true scores. If this study gives
intervention only to the half with artificially low pre-intervention scores and not to the half with
artificially high pre-intervention scores, the intervention group is likely to have on average higher
scores post-intervention, but this spurious increase in scores is purely due to random variation and
regression to the mean of extreme scores; it is not an effect of intervention.
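The scenario above can be sketched in a short simulation. This is a hypothetical illustration (all numbers invented, not from the paper): children's true abilities are fixed, every observed score adds day-to-day noise, and nobody receives intervention, yet children selected for a low score still 'improve' on retest:

```python
import random

# Hypothetical illustration: regression to the mean when children are selected
# on a cut-off using the same noisy measure that later evaluates 'progress'.
random.seed(1)

N = 10_000
CUTOFF = 85

def observed(true_score):
    # Each testing occasion adds random day-to-day variation to the true score.
    return true_score + random.gauss(0, 5)

# True abilities vary across children; none receives any intervention.
children = [random.gauss(90, 10) for _ in range(N)]

pre = [observed(t) for t in children]
selected = [i for i in range(N) if pre[i] < CUTOFF]  # 'low scorers' get picked

pre_mean = sum(pre[i] for i in selected) / len(selected)
post_mean = sum(observed(children[i]) for i in selected) / len(selected)

print(f"selected pre-test mean:  {pre_mean:.1f}")
print(f"selected post-test mean: {post_mean:.1f}")  # higher, with no intervention
```

The retest mean rises simply because the downward noise that put these children below the cut-off does not recur, exactly as in the child A/child B example.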
Thus, clinicians need to recognise that clinical practice which relies on just anecdotes and experience
could be flawed and lead to clinical experience which consists merely of 'making the same mistakes
with increasing confidence' (Isaacs & Fitzgerald, 1999; O'Donnell & Bunker, 1997). In order to avoid
this, we need to look to studies which aim to reduce some of the biases to which we are all prone.
2. Pre- and post-intervention measures
A first step to reducing bias when evaluating an intervention is to measure performance before and
after intervention on a measure which is relevant to the intervention. In order to reduce bias, this
should be carried out in the same way on both occasions (e.g., same test items, scoring and rating
procedure, situation and tester). Asking those involved with the
client (including the SLP) if they think there is improvement can give some measure of functional
change, but such reports are vulnerable to bias, particularly from those
closely involved in the intervention.
Interpreting pre- and post-intervention measures
Assuming that two scores have been obtained, one pre- and one post-intervention, the next
question is: what do these results mean? Do they show good progress? The post-intervention value
being higher than the pre-intervention value may or may not mean good progress has been made.
This depends on the degree of difference between the two scores, what the two scores represent
and whether this difference is important. For example, a difference of five between two scores
might be important if this represents a change on a test of understanding classroom instructions
from 3/8 to 8/8 or a change in life expectancy from 50 to 55 years. However, if the change is from
50% to 55% on correct production of a target phoneme in words, this may not be important to the
client and also may just be random variation in performance from one testing point to the next.
Statistical tests are available for measuring whether a change on a test which is carried out twice is
significant. For an introduction to suitable tests aimed at SLPs see Pring (2005).
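As one hedged illustration of such a test (the scores below are invented, and this randomisation approach is just one option alongside the conventional tests Pring, 2005, describes), a paired randomisation test asks how often random sign-flipping of the pre-to-post differences would produce a total gain at least as large as the one observed:

```python
import random

# Hypothetical illustration (invented scores for ten participants): a paired
# randomisation test of whether post-intervention scores exceed pre-intervention
# scores more than chance sign-flipping of the differences would predict.
random.seed(0)

pre  = [12, 15, 9, 14, 10, 11, 13, 8, 16, 12]
post = [15, 18, 10, 17, 13, 12, 16, 9, 19, 14]
diffs = [b - a for a, b in zip(pre, post)]
observed = sum(diffs)

n_iter = 10_000
count = 0
for _ in range(n_iter):
    # Under the null hypothesis, each difference is equally likely to be + or -.
    shuffled = sum(d * random.choice((1, -1)) for d in diffs)
    if shuffled >= observed:
        count += 1

p = count / n_iter
print(f"sum of gains = {observed}, one-tailed p = {p:.4f}")
```

A small p indicates the gains are unlikely to be chance fluctuation between the two testing points; as the following paragraphs stress, it says nothing about *why* the scores rose.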
Let us assume that our pre- and post-intervention tests differ significantly. In these circumstances,
can we infer that the intervention has been effective? No. It may be that the intervention was
effective, but it is also possible that an array of other factors unrelated to the intervention have led
to the increase in score.
What other factors could be responsible for ‘progress’?
For children, maturation and general development are likely explanations for many changes in
performance. As children develop cognitively, physically and emotionally and gain in experience of
the world, merely by being alive in the world, we would expect performance to improve in most
areas. In addition, most children are receiving education, both formal (in schools and nurseries) and
informally at home and elsewhere. Thus, it is important to know what you would expect in terms of
change for a child in a similar situation of a similar age not receiving the intervention. To interpret an
intervention as being effective, the progress with intervention needs to be greater than that which
would be expected without the intervention. Natural history is also important in more medical
situations, where some spontaneous recovery might be expected, so successful interventions would
need to show that they have accelerated that recovery. For clients with degenerative conditions, a
successful intervention may slow the rate of decline. Thus, in all client groups, it is crucial to be able
to compare changes with intervention to changes which would have been expected if the
intervention had not been provided.
Other factors which are important to consider with repeated measurements are regression to the
mean and practice effects. To reduce regression to the mean, studies should avoid selecting items or
participants based on particularly low scores on the first assessment or use different measures for
identification of participants from the measure(s) used to evaluate progress. If the same assessment
needs to be used for identification and evaluation of progress, studies could include a baseline
period, so regression to the mean occurs before intervention starts (see sections 4, 6, 7, 9).
Alternatively, studies could use or control areas which have similar pre-intervention scores to the
target area, or control items, which are selected using the same criteria, but which are not treated
(see section 5, 6, 7, 9). The most common method is to use control participants, who are identified
using the same criteria, but do not receive intervention. Thus regression to the mean should be
similar in both the intervention and control groups (see sections 8-10). In addition, mere experience
with a task could also improve performance on the second occasion due to practice effects, even if
underlying skills have not improved. To control for practice effects, a study could test participants on
control items the same number of times as target items, so a practice effect would affect both
targets and controls (see sections 5-7), or test control participants on the test items without
providing them with intervention (see sections 8-10).
Thus, in order to conclude that an intervention has been effective, we need to know whether
progress is different from what would be expected without the intervention given other potential
factors (natural history, maturation, regression to the mean, practice or placebo effects, other
interventions / education they are receiving). The different designs described in sections 3-10
control to a greater or lesser extent for each of these and we will go through these designs from the
least to most robust and discuss the degree to which they control for these different factors.
3. Change in standard score
Standard scores on standardised tests can help to control for maturation and general world
experience in children. Increasing standard scores would indicate that a child is progressing at a
faster rate than the children in the standardisation sample and thus progress is greater than would
be expected given general maturation and world experience.
Therefore, if a child has low performance on a standardised language test, for example, their SLP
could look to see whether both the raw and standard scores improve. If their raw scores
improve, this indicates progress relative to their own pre-intervention scores, but despite improving
raw scores, their standard scores may decrease or remain stable, or indeed they may increase. If
their standard scores or 
typically developing peers, if they remain stable, they are making progress at the same rate and if
they decrease, the gap is widening.
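As a hypothetical numerical sketch (the norms and scores are invented, not taken from any real test), the contrast between raw and standard scores can be made concrete. Here a child's raw score improves over a year, yet the standard score falls because the normative sample improved faster:

```python
# Hypothetical illustration (norms invented): raw scores can rise while the
# standard score falls, stays flat, or rises, depending on how fast the
# normative sample progresses over the same period.
def standard_score(raw, norm_mean, norm_sd):
    # Conventional scaling: mean 100, SD 15.
    return 100 + 15 * (raw - norm_mean) / norm_sd

# Invented norms: at age 8 peers average 40 (SD 8); at age 9 they average 48 (SD 8).
pre  = standard_score(34, norm_mean=40, norm_sd=8)   # age 8
post = standard_score(40, norm_mean=48, norm_sd=8)   # age 9: raw improved by 6

print(f"pre-intervention standard score:  {pre:.2f}")
print(f"post-intervention standard score: {post:.2f}")
```

The raw score rose by 6 points, but because the peers gained 8, the standard score dropped: the gap widened despite genuine progress.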
Standard scores provide information about performance relative to the children in the
standardisation sample of the test. It may be, however, that for a particular group of children,
different patterns of progress are expected. Again, it is important to know the natural history for
particular groups. For example, studies have shown that for children with DLD, with respect to their
understanding of vocabulary, the gap tends to widen with age between their performance and that
of typically developing children (Rice & Hoffman, 2015). This widening gap is despite increasing raw
scores and is probably due to a slower rate of vocabulary learning among this group, relative to the
efficient vocabulary learning of typically developing children and teenagers. In other areas of
language, such as expressive language, the trajectories of children with DLD parallel those of
typically developing children (Conti-Ramsden, St Clair, Pickles, & Durkin, 2012). Thus stable standard
scores are expected. If, in contrast, a study finds increased standard scores (e.g., Gallagher & Ebbels,
submitted), this indicates that progress in this area has accelerated.
Limitations of standard scores
While standard scores can control for maturation, they do not control for practice effects (although
standardised test manuals usually provide a time period after which you would not expect a practice
effect) or for other random or predictable factors such as other intervention or teaching which the
client may be receiving. Thus, while it may be possible to say that a child or group of children is
making faster than expected progress, it is not possible to say what factors underlie this progress.
Regression to the mean may be a problem when children have been selected for a study purely on
the basis of their low standard scores pre-intervention and progress with intervention is measured
on the same test (Tomblin, Zhang, Buckwalter, & O'Brien, 2003). This is less of a problem when they
have been selected on a different test or criteria, or if the pre-intervention test is carried out more
than once (in which case, regression to the mean would occur before intervention started).
4. Within participant control (single baseline)
Some studies control for natural history and regression to the mean by using a baseline period. The
design of these studies is shown in Figure 2. These can be used for a single case or for a group of participants.
Figure 2. Within-participants single baseline design
In this design, the same assessment is carried out at least twice before intervention starts. This
provides information about the rate of progress without intervention. This period of no intervention
before the intervention starts is known as the baseline period. If the intervention has no effect, we
would expect a similar rate of progress during the baseline and the intervention period. If the
baseline period is a similar length to the period of intervention, then no effect of intervention would
be shown by a similar degree of change between assessments 1 and 2 as between assessments 2
and 3. In contrast, a change of slope in the intervention period compared with the baseline period
could be due to the intervention (see Howard for a description of how to analyse this statistically
within a single subject). For examples of this design used with a group see Zwitserlood, Wijnen, van
Weerdenburg, and Verhoeven (2015), Bolderson, Dosanjh, Milligan, Pring, and Chiat (2011), Falkus,
Tilley, Thomas, Hockey, Kennedy, Arnold et al. (2016) and Petersen, Gillam, and Gillam (2008) and
for examples of studies with single cases see Riches (2013) and Kambanaros, Michaelides, and
Grohmann (2016).
SLPs thinking of using this design need to plan ahead so that they can carry out at least two tests
prior to starting intervention. Ideally, if only two pre-intervention assessments are being carried out,
the gap between these should be similar to the predicted length of the intervention in order to
control for maturation. For SLPs working in schools, school holiday periods can work well as baseline
periods. If the first assessment is carried out before the holidays start, the second assessment and
the intervention can take place as soon as school resumes.
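The logic of the single-baseline comparison can be sketched numerically. This is a hypothetical illustration (all scores invented): the gain over the baseline period (assessment 1 to 2) is compared with the gain over an intervention period of similar length (assessment 2 to 3):

```python
# Hypothetical illustration (scores invented for six participants): in a
# single-baseline design, the comparison of interest is the gain during the
# baseline period (T1 -> T2) versus the gain during the intervention period
# (T2 -> T3) of similar length.
t1 = [10, 12, 9, 14, 11, 13]
t2 = [11, 12, 10, 15, 11, 14]   # little change over the baseline
t3 = [15, 17, 13, 19, 16, 18]   # larger change over the intervention

baseline_gains     = [b - a for a, b in zip(t1, t2)]
intervention_gains = [c - b for b, c in zip(t2, t3)]

mean_baseline     = sum(baseline_gains) / len(baseline_gains)
mean_intervention = sum(intervention_gains) / len(intervention_gains)

print(f"mean gain over baseline:     {mean_baseline:.2f}")
print(f"mean gain over intervention: {mean_intervention:.2f}")
```

A markedly steeper slope during the intervention period is consistent with an intervention effect, although, as discussed next, other non-specific factors can also change between the two periods.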
This design can help control for maturation (as long the rate of change due to maturation is
expected to be stable during the time period of the study), regression to the mean and practice
effects (unless the practice effect is cumulative, such that it is stronger each time you repeat the test).
Limitations of single baseline design
Even if the slope during the intervention period is significantly different from during the baseline
period, the single baseline design only provides limited control over other random or predictable
factors. The change between the baseline and intervention period could be due to a placebo effect
(where merely seeing an SLP may lead participants to expect they will make progress, thus changing
their motivation and effort, leading to increased scores even though their underlying skills are
unchanged) and/or other changes (e.g., in motivation, health,
home or education situation, changes in other interventions or education being provided) which
may be exerting a general effect on their performance in all areas, including the area being
measured. It could be these other non-specific factors which are leading to the change in slope,
rather than the content of the intervention per se. In those situations where a withdrawal of
intervention is likely to lead to a withdrawal of the effect, a reversal design can be used. In this case,
if withdrawal of intervention leads to a reversal of performance trends, greater confidence can be
placed in the efficacy of the intervention. However, a reversal of intervention effects after
intervention has ceased is virtually never a desired or expected outcome in SLP and thus the
withdrawal design is of limited use to the profession and as such, other designs are preferable.
5. Within-participant design with control items/area
In situations where all participants will receive intervention (i.e., there is no control group), a certain
degree of experimental control can be gained by comparing progress on areas or items you are
targeting versus areas or items you are not targeting and do not expect to improve. This design is
shown in Figure 3.
Figure 3. Within-participants design with control items/area
Both the control and targeted items/areas are tested pre- and post-intervention. In this design, the
comparison of interest is the difference in the progress made on targets versus controls. Any
progress seen on the controls could be due to general maturation, placebo or practice effects and/or
other non-specific factors which would be expected to affect both the targets and controls. Any
additional progress seen only on the targets is likely to be related to the intervention. It is important
that pre-intervention performance on targets and controls is similar as this reduces regression to the
mean and aids statistical comparisons and interpretation of the results. This design can be
strengthened if the targets and controls are counter-balanced across participants, such that the
control areas/items for some participants are the targets for others and vice versa.
For examples of studies which have used this design with single cases see Parsons, Law, and
Gascoigne (2005), for group studies which have combined a range of targets see Mecrow, Beckwith,
and Klee (2010) and Ebbels et al. (submitted) and for studies with counter-balancing of targets and
controls across participants see Wright, Pring, and Ebbels (in prep) and Wilson, Aldersley, Dobson,
Edgar, Harding, Luckins et al. (2015).
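The logic of the targets-versus-controls comparison can be sketched in a few lines of code. The scores below are invented purely for illustration; they are not data from any of the studies cited.

```python
# Illustrative sketch (made-up scores): in a within-participant design,
# the comparison of interest is each participant's gain on targeted
# items versus their gain on untreated control items.
from statistics import mean

# pre/post scores (% correct) for five hypothetical participants
targets_pre = [20, 35, 10, 25, 30]
targets_post = [60, 70, 45, 55, 65]
controls_pre = [22, 33, 12, 27, 28]
controls_post = [28, 36, 15, 30, 33]

target_gain = [post - pre for pre, post in zip(targets_pre, targets_post)]
control_gain = [post - pre for pre, post in zip(controls_pre, controls_post)]

# Gains on controls alone could reflect maturation, practice or placebo
# effects, so only the per-participant advantage of targets over
# controls is attributed to the intervention.
advantage = [t - c for t, c in zip(target_gain, control_gain)]
print(mean(target_gain), mean(control_gain), mean(advantage))
```

Because non-specific factors should affect targets and controls alike, it is this difference score, not raw progress on targets, that carries the evidential weight.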
Limitations of the within-participant design with control items or areas
This design can control for a wide range of factors. However, the choice of control items / areas is
crucial as the design relies on finding a difference in progress between targets and controls. If
progress on the targets generalizes to the control items/area, the experimental control may be
under threat. If the generalization is relatively limited, such that targets still show more progress
than controls, experimental control is maintained. However, if progress generalizes to such an
extent that targets and controls show equal progress, experimental control is lost. Equal progress on
targets and controls could be due to generalization (which is clinically desirable) or could be due to
maturation, placebo or practice effects and/or other non-specific factors. In this situation, even if
both targets and controls show good progress, it is impossible to draw conclusions regarding the
effectiveness of the intervention. Thus, it is crucial that when choosing control areas/items,
generalization is not expected.
If SLPs wish to consider the effects of generalization, additional control needs to be added to this
design, such as a control (baseline) period (see sections 6 and 7), or a control group (see sections 8-10).
6. Within-participant design with single baseline and control items/area
This design combines the two previous designs, using both a baseline period and control items/area
and is shown in Figure 4. Thus, if targeted items/area improve more with intervention than before
intervention and more than control items/area, this controls for maturation, placebo or practice
effects, regression to the mean and other factors which would be expected to improve the control as
well as the targeted items/area.
This design has advantages over the use of control items with no baseline, as a change in controls
with intervention which is greater than the change during the baseline is more likely to be due to
generalisation than to practice effects or general maturation. Examples of single case studies and case series using this design include Kulkarni, Pring, and Ebbels (2014) and Best (2005).
Figure 4. Within-participant design with single baseline and control items/area
Limitations of within-participant designs with single baseline and control items/areas
While this design is stronger than previous designs, as changes seen in the control items during
intervention but not during baseline are unlikely to be due to maturation and practice effects, they
could still be due to a placebo effect or other factors which could be occurring in the clinical setting
around the time of changing from baseline to intervention. If the changes only occur in the targeted
items/areas and not the controls, it is likely that these are due to the intervention, but if they also
occur in the control items or areas, this weakens the design, as this could be due to generalisation,
or to other factors. Thus, as before, it is crucial to choose control items/areas to which
generalisation is not expected, otherwise experimental control can be lost.
7. Within-participant multiple baseline design
The key feature of a multiple baseline design is a staggered start to intervention. When used within
participants, it may be different items/areas which receive intervention but at different times. This
design is essentially the same as the previous design except the control items also receive
intervention but at a later date. This is illustrated in Figure 5. Thus, a baseline period is used (with at
least two testing points), followed by intervention for Target A, while Target B is held in an extended
baseline. Following intervention for Target A, Target B is treated. Maintenance of Target A may also
be assessed at the final assessment point. If Target A improves more with the first intervention than
during baseline and more than Target B, this design controls for maturation, placebo and practice
effects. If Target B also improves more during its intervention period than during its baseline, this
provides better control for other factors. This is because, if both Targets A and B improve only when
their specific intervention is provided and not before, it is less likely that non-intervention-specific
factors are causing these specific changes. An example of a study using this design with a case series
is Culatta and Horn (1982).
Figure 5. Within-participant multiple baseline design across targets
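The staggered structure described above can be summarised as a simple schedule. The assessment points and phase labels here are hypothetical, chosen only to make the design concrete:

```python
# Hypothetical schedule for a within-participant multiple baseline design:
# Target A enters intervention first while Target B stays in an extended
# baseline, then the targets swap roles.
schedule = {
    # assessment point: (phase for Target A, phase for Target B)
    "T1": ("baseline", "baseline"),
    "T2": ("baseline", "baseline"),      # at least two baseline points
    "T3": ("intervention", "baseline"),  # staggered start: only A treated
    "T4": ("intervention", "baseline"),
    "T5": ("maintenance", "intervention"),  # A maintained, B now treated
    "T6": ("maintenance", "intervention"),
}

# Experimental control: each target should improve only once its own
# intervention phase begins, not while the other target is being treated.
for point, (a, b) in schedule.items():
    print(point, a, b)
```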
Limitations of within-participant multiple baseline design
This design has similar limitations to the previous designs: if Target B improves during intervention
for Target A, (but not baseline) this still controls for maturation and practice effects, but a change
while still in extended baseline (while Target A is receiving intervention) could either be due to
generalization or other factors, including a placebo effect. In order to control for other factors such
as activities happening in classroom education, other children in the same class could act as controls,
as general classroom activities should affect their performance, but generalization from intervention
would not. Such an addition would then include comparisons between participants (see sections 8-10).
8. Between participants comparisons (with non-random assignment)
Including as control participants other clients who have similar profiles and are in similar settings can
control for some of the effects of other non-specific factors and allow more reliable investigations of
the effects of generalisation. The most common design is to administer a pre- and post-intervention
measure to two groups of participants, but only provide intervention to one group. The crucial
comparison is between progress made by the intervention group and that made by the control
group. This design is shown in Figure 6. If the groups are very similar pre-therapy and the
intervention group make more progress than the control group, this controls for maturation,
practice effects and other factors which the two groups have in common, as these would be
expected to affect the performance of both groups.
Figure 6. Between-participant comparisons
In order to make comparisons across participants with small numbers, a between-participants multiple baseline design can be used. This is similar to the within-participant multiple
baseline design (see Figure 5), except that it is participants rather than targets which have variable
baseline lengths. Thus, a single area may be targeted, but in more than one participant, with
staggered starts to intervention. If the slope of performance changes only when intervention is
introduced for each participant, then with increasing numbers of participants it becomes more likely
that the intervention itself is responsible for the change. For an example of a study using this design,
see Petersen, Gillam, Spencer, and Gillam (2010).
Limitations of between participants comparisons with non-random assignment
The main limitation of group comparisons of intervention and control participants is that the two
groups may differ from each other in ways which are predictable (e.g., different classes, schools,
teachers, abilities, backgrounds, other help/support) or unpredictable. Even if all obvious factors are
balanced between the groups, they may still differ in ways which have not been considered.
Therefore, differences between the groups in the amount of progress made during the intervention
period (for the intervention group) may be due to differences between the groups rather than to
the intervention. An example of this possible limitation is a study such as Motsch and Riehemann
(2008), where the teachers of the experimental group volunteered for an advanced course and
those of the control group did not, hence the teachers may have differed in fundamental ways (e.g.,
motivation) which could have affected the outcomes more than the
nature of the intervention itself.
The best solution to this problem is to randomly assign participants to the groups as, if the numbers
are big enough, all potential factors should balance out between the groups (see section 10).
Another approach, especially with small numbers, is to combine a between-participant and within-
participant multiple baseline design (see section 9). An alternative solution is to provide the control
group with intervention after the experimental group has stopped receiving intervention (i.e., the
controls become a waiting control group). If the waiting controls make little progress during their extended baseline, it is less likely that other non-specific factors account for the
differences in progress between the groups after the first phase of intervention. Adding intervention
for a waiting control group then creates a design similar to the between-participants multiple
baseline designs (see Figure 5) often used for (a series of) single cases, where the waiting controls
are in effect held in an extended baseline and have a staggered start to intervention.
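A minimal sketch of assigning a caseload at random to an immediate intervention group and a waiting control group follows; the participant IDs, group sizes and fixed seed are invented for illustration only:

```python
# Sketch: random assignment of a clinical caseload to an immediate
# intervention group and a waiting control group (IDs hypothetical).
import random

caseload = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(42)  # fixed seed only so the example is reproducible
shuffled = caseload[:]
random.shuffle(shuffled)

half = len(shuffled) // 2
immediate = sorted(shuffled[:half])  # receive intervention in phase 1
waiting = sorted(shuffled[half:])    # extended baseline, treated in phase 2

print(immediate, waiting)
# every client is in exactly one group
assert set(immediate) | set(waiting) == set(caseload)
```

With larger numbers, this kind of chance allocation is what balances unknown factors between the groups; with very small numbers, it cannot be relied on to do so.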
This design does not usually control for a placebo effect. However, this can be controlled for by
providing non-intervention-specific special attention to the control group instead of just no
intervention. This could even be intervention but on a different, unrelated area (which is not
expected to generalise to the area under investigation). Indeed, in our research, we frequently use
this approach as all children in our setting have to receive intervention, so our (waiting) controls
receive intervention in a different area to that being investigated in the study, rather than no
intervention. This avoids the ethical dilemma of involving participants in a study who receive no
intervention whilst also controlling for possible placebo effects.
9. Combined between and within participant designs
Some group studies (e.g., Smith-Lock, Leitao, Lambert, & Nickels, 2013) add in within-participant
control by adding a baseline period for both the intervention and control groups. This follows a
similar pattern to Figure 4 but it is participants rather than items/areas which act as controls by
receiving either no intervention or, as in Smith-Lock et al. (2013), receiving intervention in a different
area, thus controlling for the placebo effect. This study also included a control measure for the
experimental intervention group, so placebo and non-specific effects were controlled both between
and within participants. Such additions strengthen the design and also allow researchers to look at
the performance of individuals within each group.
For studies with small numbers of participants, a multiple baseline design both between and within
participants is a strong design (see Figure 7). At least two participants are involved, but increased
numbers improve reliability and generalizability and also introduce the possibility of comparing
performance across groups. In this design, all participants undergo a baseline period with at least
two assessment points, then the two (groups of) participants receive intervention, but on different
targets. After a period of intervention, they both swap to the other target. If progress is seen on
each target only when it is targeted, it is likely that it is the intervention which underlies the change
rather than other non-specific factors (which would be expected to affect both targets regardless of
the focus of intervention). This is even more likely when the targets and participants are randomly
assigned to the different periods of intervention and when more participants are included. Ebbels
and van der Lely (2001) used this design, albeit without randomisation.
Figure 7. Between- and within-participant multiple baseline design
Limitations of combined between and within participant designs
As with previous within-participant designs, it is important that generalization does not occur
between the two targets. If intervention on either target improved performance equally on both
targets, the design would in effect be reduced to a single baseline design (see section 4), which has
less experimental control and where conclusions regarding the effectiveness of the intervention are
harder to draw. Thus, it is essential that the target areas are chosen very carefully such that
generalization between them is not expected.
10. Between participant design (randomised control trial)
The most robust design is the randomised control trial (RCT): with sufficiently large numbers and random assignment, all potential factors other than the intervention
become evenly distributed between the groups and are thus unlikely to affect the results. The design
of an RCT at its simplest is represented in Figure 6. This design is feasible within clinical practice,
although it is easiest where intervention is 1:1. For example, if a group of clients are all due to
receive a period of intervention, they can be randomly assigned to intervention versus control groups and assessed before and after intervention is provided. A control group
can take various forms, which SLPs may view as more or less ethically acceptable. They could receive
no intervention (e.g., Fey, Cleave, & Long, 1997), or treatment as usual (e.g., Adams, Lockton,
Freed, Gaile, Earl, McBean et al., 2012; Boyle, McCartney, O'Hare, & Forbes, 2009; Cohen, Hodson,
O'Hare, Boyle, Durrani, McCartney et al., 2005), or intervention in a different area (e.g., Ebbels, van
der Lely, & Dockrell, 2007; Mulac & Tomlinson, 1977) or a non-specific intervention (e.g., the
"academic enrichment group" in Gillam, Loeb, Hoffman, Bohman, Champlin, Thibodeau et al., 2008)
which are not predicted to affect the target area. Alternatively, the control group could receive the experimental intervention after intervention for the experimental group has finished (i.e., a waiting control group). Such waiting
controls could either receive no intervention (e.g., Fey, Cleave, Long, & Hughes, 1993; Fey, Finestack,
Gajewski, Popescu, & Lewine, 2010; Fricke, Bowyer-Crane, Haley, Hulme, & Snowling, 2013), or they
could receive intervention on a different, unrelated area which is not expected to affect the target
area (e.g., Ebbels et al., 2014; 2012). Studies vary in whether they report the progress made by the
waiting control group (which delays publication of the study, but strengthens the findings), or only
the results after the first stage of intervention for the experimental group. Clinicians often worry
about the ethics of control groups. In my view, if there is as yet no evidence that an experimental intervention is effective, it is perfectly acceptable to withhold this intervention for the
purposes of a study which could contribute future evidence. Indeed, waiting control groups may get
the best deal, particularly if they only receive the experimental intervention if the first phase of the
trial indicates it is effective and not if there is doubt about its effectiveness.
This design can also be extended to investigate generalisation by including assessments of items or
areas where generalisation is expected. Both groups of participants are tested on both target and
generalisation items, but only the intervention group receives intervention. If this group improves on
both controls and targets, but the control group do not, it is likely that the progress on the
generalisation test is due to the intervention. It could also be due to a placebo effect, but this could
be controlled by giving intervention to the control group on another area at the same time. Findings
from RCTs can be further strengthened by using a waiting control group, who then go on to receive
intervention. If they also make progress after intervention, but not while acting as controls, this
strengthens the conclusion that the intervention is effective. We carried out an RCT using this design
(Ebbels et al., 2014) and included both a control structure (where we did not expect generalisation)
and a generalisation test (where we were specifically looking for generalisation when the target
intervention was received). These extensions to the basic design in Figure 6, however, while
strengthening the design, do make it much more complex and thus more difficult to carry out. As an
example of an extended and more complex design see Figure 8 for the design of the Ebbels et al.
(2014) study.
Figure 8. Randomised control trial with waiting controls, plus control and generalisation tasks, as used in Ebbels et al. (2014)
Limitations of RCTs
Randomised control trials are the most robust design. However, it is important that the numbers in
the randomisation sample are sufficient that randomisation is likely to have led to a balance of all
potential influencing factors between the groups. If a study has too few participants, the design in
section 9 may be more appropriate. Ideally, randomisation would be carried out at the level of the
individual, but in some studies this is not possible. For example, an intervention involving training of
education staff may need to be carried out at a school level. While schools could be randomised to
different groups, the students within those schools have not been randomised and thus large
numbers of schools would be required for potential factors to be evenly distributed between the
groups. This design (known as a cluster randomised control trial) is complex to design and analyse, and the majority of such studies do not account for clustering in their design or analysis (Campbell,
Elbourne, & Altman, 2004). For example, a trial involving two schools which are randomised one to
receive and one not to receive intervention (such as Starling, Munro, Togher, & Arciuli, 2012) is not
an RCT as the participants are not randomly allocated to schools, so there is no guarantee that the
two schools, the staff teaching in them and the students attending them do not differ in some
important ways (indeed this is very likely).
As with other designs, placebo effects can only be controlled for if the control group receives some
kind of attention or alternative intervention, rather than no intervention at all.
Interpreting the results of a study
The design of a study can be appraised in terms of its robustness without reading the results or
discussion. Indeed some suggest (Greenhalgh, 1997) that readers should decide whether or not to
read a paper by first reading the method only and if the design is insufficiently robust, to abandon
reading the rest of the paper, as its findings are unlikely to be reliable. When considering the
robustness of the design, the reader needs to consider: the degree of experimental control provided
(see above) and the number of participants (generally, greater numbers of participants increase
reliability). For studies with a robust design and large number of participants, more confidence can
be placed in the results (see Figure 1), whether those results are in favour of the intervention
studied, or not.
Having decided that a study has a robust design with a sufficient number of participants to produce
reasonably reliable results, the reader needs to consider other points before deciding whether or
not to use the intervention in clinical practice. The first is whether the results are statistically
significant and the degree of significance (lower p-values provide stronger evidence that the result is not due to chance). In general, a
marginally significant result should be considered with more caution than a highly significant result.
The second factor to consider is the size of the effect and whether it is relevant to the clients (i.e., is
it clinically significant?). The third factor interacts with consideration of the size of the effect and this
is the amount and cost of the input which is required to obtain that effect. An intervention which
has a small, but clinically relevant effect and costs very little to implement may be as worthwhile to
include in clinical practice as an intervention with a very large, clinically very important effect with a
high cost. However, interventions with small effects and high costs may not be appropriate to
include in clinical practice, even if they have statistically significant results. This is particularly the
case if other interventions have similar effects for lower costs, or larger effects for the same cost.
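One common way to quantify the size of an effect, as distinct from its statistical significance, is a standardised mean difference such as Cohen's d. The sketch below uses invented gain scores, not data from any cited study:

```python
# Sketch: statistical vs clinical significance. Cohen's d on gain
# scores (illustrative numbers): a pooled-SD standardised mean difference.
from statistics import mean, stdev

intervention_gains = [12, 15, 9, 14, 11, 13, 10, 16]
control_gains = [3, 5, 2, 6, 4, 3, 5, 4]

def cohens_d(a, b):
    # pooled standard deviation of the two groups
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

d = cohens_d(intervention_gains, control_gains)
print(round(d, 2))  # conventionally, d of 0.8 or more is a "large" effect
```

A small p-value says only that an effect is unlikely to be chance; d expresses how big it is, which is what the cost-benefit judgement above turns on.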
The final factor to consider is how similar the participants in the study are to those on the SLP's own caseload. If the differences are too great, the study may be irrelevant to the SLP's client group. However, if the SLP's clients are similar in some ways to the study participants but different in others, it may be worthwhile trying the intervention. In this case, however, the SLP
should evaluate the effectiveness of the intervention with their different client group.
How can I start to be research active and what support do I need?
For an SLP with a regular caseload, only a few tweaks may be needed to turn standard intervention
into a research project. All designs can be carried out as part of routine practice if everyone involved
is willing to be flexible and committed to the purposes of the project. Measuring indicators of
outcomes (what you want to achieve) before and after intervention is good clinical practice and can
form the beginnings of research. Thus, there is no definite line between research and good clinical
practice, but research generally includes greater controls. Even RCTs are feasible as part of clinical practice and do not require huge numbers of participants if you are only interested in large
effects. Indeed, in my experience, I have found small-scale RCTs (e.g., Ebbels et al., 2014; 2012)
easier to carry out than within-participant designs (e.g., Ebbels et al., submitted). This is particularly
true where generalisation might be expected, as identifying suitable control areas or items can be
very difficult.
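The point about large effects can be made concrete with the standard normal-approximation formula for the per-group sample size of a two-group comparison, n ≈ 2((z_α/2 + z_β)/d)². This is a rough textbook approximation, not a substitute for proper power analysis on a real study:

```python
# Rough per-group sample size for a two-group comparison, using the
# normal-approximation formula n = ceil(2 * ((z_alpha/2 + z_beta) / d)^2).
from statistics import NormalDist
import math

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# a "large" effect (d = 0.8) needs far fewer participants per group
# than a "small" one (d = 0.2)
print(n_per_group(0.8), n_per_group(0.2))
```

With the conventional 5% alpha and 80% power, detecting a large effect needs roughly 25 participants per group, whereas a small effect needs several hundred, which is why small clinical trials are realistic only when large effects are of interest.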
The main requirements for carrying out research in clinical practice are time and support. Time is
needed for staff to develop research skills, and to design and carry out projects. Planning time needs
to be built in and time spent at the planning stage can dramatically improve the usefulness of a
project. The research design needs to be carefully thought through to maximise the robustness of
the design given practical constraints. Assessment and intervention plans, materials and resources
may need to be created specifically for a project. Those carrying out the intervention (and
assessments) will need training to ensure they carry these out to the requirements of the project
(SLP students can be a good source of assessors); this will also take time to organise. Inclusion of
your research project in your appraisal or progress review may allow for ring-fencing of time and
increased motivation to prioritise the project on all sides. In my organisation, half a day a week of
dedicated time has proved sufficient for clinicians to plan and coordinate small-scale research
projects; these include Ebbels and van der Lely (2001) and Wright et al. (in prep), while larger-scale
projects have required more dedicated time. The participants involved in a project will also need to
commit more time to a project than to usual intervention. This is mainly due to the increased
number of assessments required for more rigorous designs. They may also be required to attend for
more intervention. Hopefully, if the study is theoretically and clinically well-motivated, this increase
in time on their part will result in better outcomes for them, which is ethically more acceptable.
Carrying out a research project in clinical practice also requires support, particularly from the
management in your organisation. This is more likely to be forthcoming if your proposed research is
of direct clinical relevance to your service. However, you may also need the support of your
colleagues (particularly if they will be providing some of the intervention). Administrative support
would also be helpful. A crucial element, however, is to gain support from someone with research
expertise who can provide advice prior to the study on research design including how many
participants may be required, ethical requirements and options for analysis. On completion of your
study they can also advise on dissemination of your findings.
Clinical practice of SLPs will be improved if we all incorporate aspects of evidence-based practice
into our work. Whether we are interpreting the research studies of others, or designing our own, we
need a good understanding of research design and an ability to recognise weaknesses in
intervention studies which may reduce the reliability of study findings. Striving to maximise both the
robustness and clinical relevance of intervention studies and ensuring that SLPs have the time, skills
and support to read and (co-)create research and apply relevant findings to their clinical practice,
should be a priority for the profession.
Adams, C., Lockton, E., Freed, J., Gaile, J., Earl, G., McBean, K., Nash, M., Green, J., Vail, A., & Law, J.
(2012). The Social Communication Intervention Project: a randomized controlled trial of the
effectiveness of speech and language therapy for school-age children who have pragmatic
and social communication problems with or without autism spectrum disorder. International
Journal of Language & Communication Disorders, 47(3), 233-244.
Best, W. (2005). Investigation of a new intervention for children with word-finding problems.
International Journal of Language & Communication Disorders, 40(3), 279-318.
Bolderson, S., Dosanjh, C., Milligan, C., Pring, T., & Chiat, S. (2011). Colourful semantics: A clinical
investigation. Child Language Teaching & Therapy, 27(3), 344-353.
Boyle, J. M., McCartney, E., O'Hare, A., & Forbes, J. (2009). Direct versus indirect and individual
versus group modes of language therapy for children with primary language impairment:
principal outcomes from a randomized controlled trial and economic evaluation.
International Journal of Language & Communication Disorders, 44(6), 826-846.
Campbell, M. K., Elbourne, D. R., & Altman, D. G. (2004). CONSORT statement: extension to cluster
randomised trials. BMJ, 328(7441), 702-708. doi:10.1136/bmj.328.7441.702
Casarett, D. (2016). The Science of Choosing Wisely: Overcoming the Therapeutic Illusion. New England Journal of Medicine, 374(13), 1203-1205. doi:10.1056/NEJMp1516803
Cohen, W., Hodson, A., O'Hare, A., Boyle, J., Durrani, T., McCartney, E., Mattey, M., Naftalin, L., &
Watson, J. (2005). Effects of computer-based intervention using acoustically modified
speech (Fast ForWord-Language) in severe mixed receptive-expressive language impairment:
outcomes from a randomized control trial. Journal of Speech Language and Hearing
Research, 48(3), 715-729.
Conti-Ramsden, G., St Clair, M. C., Pickles, A., & Durkin, K. (2012). Developmental Trajectories of
Verbal and Nonverbal Skills in Individuals With a History of Specific Language Impairment:
From Childhood to Adolescence. Journal of Speech Language and Hearing Research, 55(6),
1716-1735.
Culatta, B., & Horn, D. (1982). A Program for Achieving Generalization of Grammatical Rules to
Spontaneous Discourse. Journal of Speech and Hearing Disorders, 47(2), 174-180.
Ebbels, S., & van der Lely, H. (2001). Meta-syntactic therapy using visual coding for children with
severe persistent SLI. International Journal of Language & Communication Disorders,
36(supplement), 345-350.
Ebbels, S., Wright, L., Brockbank, S., Godfrey, C., Harris, C., Leniston, H., Neary, K., Nicoll, H., Nicoll, … (submitted). Effectiveness of 1:1 speech and language therapy for older children with developmental language impairments.
Ebbels, S. H., Maric, N., Murphy, A., & Turner, G. (2014). Improving comprehension in adolescents
with severe receptive language impairments: a randomised control trial of intervention for
coordinating conjunctions. International Journal of Language & Communication Disorders,
49(1), 30-48.
Ebbels, S. H., Nicoll, H., Clark, B., Eachus, B., Gallagher, A. L., Horniman, K., Jennings, M., McEvoy, K.,
Nimmo, L., & Turner, G. (2012). Effectiveness of semantic therapy for word-finding
difficulties in pupils with persistent language impairments: a randomized control trial.
International Journal of Language & Communication Disorders, 47(1), 35-51.
Ebbels, S. H., van der Lely, H. K. J., & Dockrell, J. E. (2007). Intervention for verb argument structure
in children with persistent SLI: a randomized control trial. Journal of Speech Language and
Hearing Research, 50, 1330-1349.
Falkus, G., Tilley, C., Thomas, C., Hockey, H., Kennedy, A., Arnold, T., Thorburn, B., Jones, K., Patel, B., … F., Leahy, R., & Pring, T. (2016). Assessing the effectiveness of parent-child interaction therapy with language delayed children: A clinical investigation. Child Language Teaching and Therapy, 32(1), 7-17.
Fey, M. E., Cleave, P., Long, S. H., & Hughes, D. L. (1993). Two Approaches to the Facilitation of
Grammar in Children with Language Impairment: An Experimental Evaluation. Journal of
Speech and Hearing Research, 36, 141-157.
Fey, M. E., Cleave, P. L., & Long, S. H. (1997). Two models of grammar facilitation in children with
language impairments: phase 2. Journal of Speech Language and Hearing Research, 40, 5-19.
Fey, M. E., Finestack, L. H., Gajewski, B. J., Popescu, M., & Lewine, J. D. (2010). A Preliminary
Evaluation of Fast ForWord-Language as an Adjuvant Treatment in Language Intervention.
Journal of Speech Language and Hearing Research, 53(2), 430-449. doi:10.1044/1092-
Fricke, S., Bowyer-Crane, C. A., Haley, A. J., Hulme, C., & Snowling, M. (2013). Efficacy of language
intervention in the early years. Journal of Child Psychology and Psychiatry, 54(3), 280-290.
Gallagher, A., & Ebbels, S. H. (submitted). Language, literacy, numeracy and educational outcomes in
adolescents with developmental language disorder following education in a specialist
provision with integrated speech and language therapy; a service evaluation.
Gillam, R. B., Loeb, D. F., Hoffman, L. M., Bohman, T., Champlin, C. A., Thibodeau, L., Widen, J., Brandel, J., & Friel-Patti, S. (2008). The efficacy of Fast ForWord Language Intervention in school-age children with language impairment: A randomized controlled trial. Journal of Speech Language and Hearing Research, 51(1), 97-119.
Greenhalgh, T. (1997). How to read a paper. Getting your bearings (deciding what the paper is
about). BMJ: British Medical Journal, 315(7102), 243.
Isaacs, D., & Fitzgerald, D. (1999). Seven alternatives to evidence based medicine. BMJ, 319(7225), 1618. doi:10.1136/bmj.319.7225.1618
Kambanaros, M., Michaelides, M., & Grohmann, K. K. (2016). Cross-linguistic transfer effects after phonologically based cognate therapy in a case of multilingual specific language impairment (SLI). International Journal of Language & Communication Disorders. Advance online publication.
Kulkarni, A., Pring, T., & Ebbels, S. (2014). Evaluating the effectiveness of Shape Coding therapy to
develop the use of regular past tense morphemes in two children with language
impairments. Child Language Teaching and Therapy, 30(3), 245-254.
Mecrow, C., Beckwith, J., & Klee, T. (2010). An exploratory trial of the effectiveness of an enhanced consultative approach to delivering speech and language intervention in schools. International Journal of Language & Communication Disorders, 45(3), 354-367.
Motsch, H. J., & Riehemann, S. (2008). Effects of 'Context-Optimization' on the acquisition of grammatical case in children with specific language impairment: an experimental evaluation in the classroom. International Journal of Language & Communication Disorders, 43(6), 683-698. doi:10.1080/13682820701794728
Mulac, A., & Tomlinson, C. N. (1977). Generalization of an operant remediation program for syntax
with language delayed children. Journal of Communication Disorders, 10, 231-243.
Nail-Chiwetalu, B., & Ratner, N. B. (2007). An assessment of the information-seeking abilities and
needs of practicing speech-language pathologists. Journal of the Medical Library Association,
95(2), 182.
O'Donnell, M., & Bunker, J. (1997). A sceptic's medical dictionary. BMJ, 315(7119), 1387.
Parsons, S., Law, J., & Gascoigne, M. (2005). Teaching receptive vocabulary to children with specific
language impairment: a curriculum-based approach. Child Language Teaching and Therapy,
21(1), 39-59.
Petersen, D. B., Gillam, S. L., Spencer, T., & Gillam, R. B. (2010). The Effects of Literate Narrative Intervention on Children With Neurologically Based Language Impairments: An Early Stage Study. Journal of Speech Language and Hearing Research, 53(4), 961-981.
Petersen, D. B., Gillam, S. L., & Gillam, R. B. (2008). Emerging procedures in narrative assessment: The index of narrative complexity. Topics in Language Disorders, 28(2), 115-130.
Pring, T. (2005). Research Methods in Communication Disorders. London: Whurr Publishers.
Rice, M. L., & Hoffman, L. (2015). Predicting vocabulary growth in children with and without specific language impairment: a longitudinal study from 2;6 to 21 years of age. Journal of Speech, Language, and Hearing Research, 58(2), 345-359.
Riches, N. (2013). Treating the passive in children with specific language impairment: A usage-based
approach. Child Language Teaching and Therapy, 29(2), 155-169.
Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn't. BMJ, 312(7023), 71-72.
Smith-Lock, K. M., Leitao, S., Lambert, L., & Nickels, L. (2013). Effective intervention for expressive grammar in children with specific language impairment. International Journal of Language & Communication Disorders, 48(3), 265-282.
Snowling, M. J., & Hulme, C. (2011). Evidence-based interventions for reading and language difficulties: Creating a virtuous circle. British Journal of Educational Psychology, 81(1), 1-23.
Starling, J., Munro, N., Togher, L., & Arciuli, J. (2012). Training secondary school teachers in instructional language modification techniques to support adolescents with language impairment: a randomized controlled trial. Language, Speech, and Hearing Services in Schools, 43(4), 474-495.
Thomas, K. B. (1978). The consultation and the therapeutic illusion. BMJ, 1(6123), 1327-1328.
Tomblin, J. B., Zhang, X., Buckwalter, P., & O'Brien, M. (2003). The stability of primary language
disorder: Four years after kindergarten diagnosis. Journal of Speech Language and Hearing
Research, 46(6), 1283-1296.
Wilson, J., Aldersley, A., Dobson, C., Edgar, S., Harding, C., Luckins, J., Wiseman, F., & Pring, T. (2015).
The effectiveness of semantic therapy for the word finding difficulties of children with
severe and complex speech, language and communication needs. Child Language Teaching
and Therapy, 31, 7-17.
Wright, L., Pring, T., & Ebbels, S. H. (in prep). Effectiveness of vocabulary intervention for older
children with Developmental Language Disorder (DLD).
Zipoli, R. P., & Kennedy, M. (2005). Evidence-based practice among speech-language pathologists: Attitudes, utilization, and barriers. American Journal of Speech-Language Pathology, 14(3), 208-220.
Zwitserlood, R., Wijnen, F., van Weerdenburg, M., & Verhoeven, L. (2015). 'MetaTaal': enhancing complex syntax in children with specific language impairment: a metalinguistic and multimodal approach. International Journal of Language & Communication Disorders, 50(3), 273-297.
... The Control group received the same language intervention one week after the second assessment session and were assessed once more by the end of their intervention. Despite not intended at the beginning of the clinical education courses, the data obtained eventually fit into a between-participant design (Ebbels, 2017). Pre-and post-intervention language measures were obtained. ...
... Due to the pandemic in 2020, a convenient sampling in the format of retrospective study of existing data was used in participant recruitment. While the current design was the best we could achieved during the pandemic in 2020, so as to fulfil the strong demands of language intervention by the parents of children with language difficulties at that time, it would be more ideal if a real waitlist crossover randomised controlled trial design (Ebbels, 2017) was used. ...
... It is suggested that future studies can be conducted to replicate the current findings on children diagnosed as having language disorder using norm-referenced language assessment. This will help not only to confirm the applicability of the results of the current study to individuals with language disorder, but also to improve the confidence of the results (Ebbels, 2017). ...
It has been well-documented that language input designed according to the principles of statistical learning can promote language acquisition among children with or without language disorder. Cantonese-speaking children with language disorder were reported to have difficulties using expanded verb phrases and prepositional phrases, but the corresponding intervention is relatively unexplored. The current study evaluated the efficacy of an intervention designed using the statistical learning principles to promote the acquisition of these two structures. A retrospective study of existing data collected from a total of 16 Cantonese-speaking children (four female; mean age = 6.70 years) with suspected language disorder was conducted. The participants were initially divided into the ‘Treatment’ and the ‘Control’ groups. A total of eight sessions of language treatment, which focused on giving systematic language input of expanded verb phrases and prepositional phrases, were conducted on each child. Results showed that the Treatment group produced significantly more expanded verb phrases in the post-treatment language samples, while the Control group did not. The final pre- and post-comparison conducted after the Control group also received treatment indicated overall significant increased number of expanded verb phrases produced across time. On the contrary, improvement in the production of prepositional phrases was not significant. It is suggested that the unique thematic roles coded by individual prepositions possibly restricted the generalisation effect of treatment, which explains the non-significant improvement across time. Theoretical and clinical implications were discussed.
... Dans le contexte d'une réflexion globale sur la pratique fondée sur les preuves (Evidence Based Practice ou EBP), les orthophonistes sont de plus en plus invités à mesurer l'efficacité de leur intervention (Ebbels 2017). Pour ce faire, ils doivent identifier des objectifs précis et argumentés, et déterminer les modalités d'intervention thérapeutique leur paraissant les plus adaptées (Martinez et al. 2015). ...
... Etant donné le caractère écologique de cette recherche, il est possible toutefois que d'autres facteurs soient à l'origine des différences observées (Ebbels 2017), en particulier dans le cas de la dyade ORT2-Lilou car différents types d'activités ont été enregistrés, dans un intervalle de près de neuf mois au cours duquel les compétences de l'enfant ainsi que les objectifs thérapeutiques ont évolué. Pour pouvoir évaluer de manière plus précise l'impact des échanges dans le groupe sur les productions de l'orthophoniste, tout en maintenant les fondements méthodologiques déterminés, nous envisageons de créer des lignes de base (Martinez et al. 2015) en comparant, par exemple, la proportion de questions et leurs types à la proportion des reformulations et leurs types. ...
... Pour confirmer l'hypothèse que l'allongement des énoncés des enfants est lié à l'évolution des modes d'intervention langagière des orthophonistes, il serait utile là aussi de créer des lignes de base permettant de contrôler l'impact d'autres facteurs (Ebbels 2017). En pratique, il paraît toutefois difficilement envisageable de séparer le rôle des modes d'intervention langagière ciblés par l'orthophoniste des autres outils que le professionnel mobilise simultanément pour atteindre ses objectifs thérapeutiques. ...
This paper presents the collaborative establishment of a group of speech and language pathologists (SLP) and linguists that focuses on scaffolding and its effects in interactions during therapy. Based on transcribed audio recordings made by the practitioners themselves, qualitative analyses were collectively conducted and individual scaffolding objectives relating to specific therapeutic goals were identified by the SLP. Quantitative analyses on two dyads suggest that the qualitative analyses conducted in the group influenced the SLP's productions which in turn might have had an impact on the children's productions. We discuss the need to use baselines adapted to ecological settings in further studies, to better evaluate the effects of other internal and external factors. We conclude by considering the benefits of this type of collaboration both for practitioners and linguists.
... Elle s'appuie en grande partie sur l'expertise du clinicien (Dodd, 2007). Pour permettre au clinicien de créer ses propres preuves, plusieurs auteurs (Dodd, 2007;Dollaghan, 2007;Ebbels, 2017;Olswang & Bain, 1994) proposent une méthode de mesure par lignes de base, dans une démarche proche de celle de la recherche. Ces données doivent ainsi répondre à un certain nombre de critères pour être valides. ...
... Enfin, les modalités de passation des lignes de base choisies par les participants sont majoritairement conformes aux recommandations pour limiter les biais d'interprétation. En effet, les orthophonistes ont majoritairement choisi de faire passer un pré-test sur toutes les listes, puis un ou deux post-tests sur toutes les listes ; ces modalités sont conformes aux modèles les plus rigoureux préconisés par la littérature(Ebbels, 2017), avec une mesure contrôle et une phase de maintien après traitement. Pour synthétiser ces vérifications d'hypothèses, un profil idéal, qui favoriserait la pratique des mesures d'efficacité thérapeutique chez les orthophonistes, peut être dessiné. ...
Full-text available
La mesure de l’efficacité dans les interventions orthophoniques s’appuie en grande partie sur l’expertise du clinicien, à l’aide d’une prise de mesures régulière sur l’évolution du patient. Mais malgré une adhésion majoritaire aux principes de l’Evidence-based Practice (EBP), les orthophonistes francophones utilisent encore peu les outils de mesure d’efficacité, comme les lignes de base ou les questionnaires d’auto-évaluation des patients. Les barrières à leur usage sont le manque de formation et le manque de temps. Or la parution récente des Recommandations de Bonne Pratique (RBP) dans le domaine du langage écrit, pilotée par le Collège Français d’Orthophonie, préconise l’usage de ces outils. L’objectif de cette étude était de mesurer l’impact d’un outil d’information sur les lignes de base et l’auto-évaluation du patient, dans le domaine des troubles du langage écrit, destiné aux orthophonistes exerçant en libéral ou en mixte. Une méthode en trois étapes a été choisie : après un questionnaire sur les pratiques et les attentes des participants concernant les lignes de base, un site internet a été conçu et mis à disposition des participants pendant trois mois. Enfin un questionnaire a permis de mesurer l’impact du système d’information. Les résultats apparaissent en partie similaires aux enquêtes antérieures. Ils permettent de dresser un profil idéal qui favoriserait la pratique des mesures d’efficacité : l’orthophoniste a reçu une formation initiale sur l’EBP et l’a complétée par une démarche personnelle. Les modèles disponibles lui permettent de mettre rapidement en pratique les mesures d’efficacité. A partir de trois lignes de base réalisées, le thérapeute gagne en autonomie pour créer ses propres mesures. A ce stade, il se tourne vers les auto-évaluations de ressenti du patient. Après deux questionnaires, le clinicien estime avoir gagné en efficacité et amélioré la motivation de son patient. 
La démarche de mesure d’efficacité a ainsi plus de chances d’être intégrée et conservée.
... Algunos modelos de PBE, como el modelo Implementación de la evidencia científica incorporar investigaciones nuevas y de alta calidad de evidencias que tenga implicaciones para la práctica clínica. En este sentido, a diferencia de Drisko y Grady (2012), algunos autores plantean factible la implementación de investigación con el uso de diseños experimentales complejos aplicados al contexto clínico o educativo (Ebbels, 2017). Se debe tener en cuenta que para realizar una investigación aplicada en contexto clínicos y educativos se necesita que esta sea parte de los objetivos de la institución, una buena programación anticipada, profesional con conocimiento en investigación y un equipo motivado a participar en el proceso de investigación. ...
... Sin embargo, habrá un efecto de intervención, si se genera un cambio entre las medidas del período base y del período de intervención. Ampliar información sobre diferentes diseños experimentales de intervención, su grado de control y las limitaciones metodológicas propias a cada uno en Ebbels (2017). ...
... One of the study's key conclusions is the value of using both baseline and maintenance periods alongside more traditional pre and post-intervention assessment measures in repeated measures design studies. They also state that investigating changes in scores between repeated outcome measure time points can help identify whether changes in scores may be due to natural maturation or effective interventions and whether any progress made can be maintained in the longer-term (Ebbels, 2017). ...
Introduction: Nine to 16 year olds with developmental language disorder (DLD) tend to have significant difficulties with understanding and using idioms. However, research investigating methods to assess or improve these skills has been limited. Aims: 1. Examine idiom skills in Typically Developing (TD) children and children with DLD. 2. Investigate the effectiveness of idiom skills intervention delivered through 1:1 SLT and classroom-based sessions for children with DLD. Methods: Seventy-two TD children attending mainstream schools and fifty-eight children attending a specialist school for children with DLD completed a bespoke idiom skills assessment. Forty-nine of the children with DLD (aged nine-16) then received twenty idiom skill intervention sessions during two school terms. Following a baseline period of one term, twenty-five participants (aged 11-16) received 1:1 intervention for one term and classroom-based intervention for the next term. The other twenty-four participants (aged 9-16) received classroom-based followed by 1:1 intervention. Classroom-based intervention was delivered collaboratively by English teachers and SLTs during English lessons and 1:1 intervention by the participants’ usual SLT. Intervention was the same for both delivery methods involving a prescriptive powerpoint presentation and worksheet alongside discussion. All participants were assessed on their ability to identify, understand, explain and use idioms before and after each intervention, using a bespoke assessment including 48 idioms which were randomly assigned to three sets: 16 idioms targeted in 1:1 SLT, 16 targeted in classroom-based intervention and a control group of 16 idioms that were not targeted. Results and Conclusions: TD participants achieved higher scores than DLD participants on all aspects of testing. 
Both 1:1 SLT and classroom-based delivery methods were effective for improving idiom skills but there was not sufficient evidence to show that idiom skills generalised. Intervention can be effective for improving the idiom skills of nine-16 year olds with severe DLD. More research is required to investigate methods to generalise idiom skill components, especially receptive idiom skills.
... Pour autant, l'étude ne peut permettre, en l'état, de reconnaître l'intervention proposée sur la comptine numérique et le comptage comme une pratique probante. En effet, bien que l'étude de cas puisse être d'une très grande rigueur méthodologie (Ebbels, 2017), elle documente une pratique comme étant probante lorsque les effets expérimentaux sont reproduits dans un nombre suffisant d'études (5), de chercheurs (3) et de participants (20) pour Horner et al. (2005). Nous encourageons d'ailleurs les orthophonistes qui utiliseraient les activités d'apprentissage du protocole avec leurs patient·e·s à communiquer ensuite les résultats et l'évolution des habiletés. ...
Conference Paper
Intervention ciblant la comptine numérique chez des enfants ayant un trouble développemental du langage : effet sur l'automatisation de la comptine numérique et transfert sur les habiletés de dénombrement et de calcul 17 Chapitre La cognition mathématique Résumé Les orthophonistes sont souvent confronté·e·s à des enfants ayant un trouble développemental du langage (TDL) qui développent conjointement des difficultés en mathématiques. En particulier, la compétence langagière est un des précurseurs nécessaires à la maîtrise de la comptine numérique. C'est pourquoi les enfants TDL présentent souvent, plus que les enfants tout-venant, des difficultés pour l'habileté de comptine numérique. Or, la récitation de la comptine numérique est une compétence qui s'acquiert précocement et qui est essentielle au développement des autres habiletés mathématiques. Ainsi, il est important d'intervenir précocement auprès des enfants TDL qui ne maîtrisent pas la comptine numérique. L'objectif de cette étude était d'évaluer l'efficacité d'un protocole d'intervention ciblant la récitation et la manipulation de la comptine numérique auprès de trois enfants ayant un TDL âgés de 4 à 6 ans. La méthode d'étude de cas multiples a été utilisée. L'intervention comprenait six séances de vingt minutes à raison d'une séance en individuel par semaine, ainsi qu'un accompagnement familial de dix minutes par semaine. Chaque séance se composait de quatre activités ludiques entraînant la récitation et la manipulation de la comptine numérique par l'intermédiaire de stimuli auditifs et/ou visuels ou faisant intervenir la motricité globale. Les résultats ont montré une amélioration des performances en comptine et en dénombrement pour les trois enfants et une amélioration des performances en calcul d'additions pour un des enfants. Sur le plan théorique, les résultats suggèrent une relation de causalité entre l'automatisation de la comptine numérique et l'amélioration des performances en dénombrement. 
Cette étude est encourageante pour la pratique clinique orthophonique et montre l'intérêt d'une intervention précoce axée sur l'apprentissage de la comptine numérique. Mots-clés Comptine numérique, dénombrement, calcul, trouble développemental du langage, intervention Chapitre 17 Abstract The speech-language therapists are often faced with children who have developmental language disorders (DLD) who also develop mathematics difficulties. Language skills are one of the precursors necessary for mastering rote counting. Children with DLD therefore may have more difficulties with rote counting than typically developing children. Recitation and manipulation of rote counting are early learning and are essential to the development of other mathematics skills. Thus, intervening early with children with DLD who have difficulty with rote counting is important. The objective of this study was to assess the effectiveness of a intervention protocol targeting the recitation and manipulation of the rote counting with three children with DLD aged from 4 to 6 years. The multiple case study method was used. The intervention consisted of six one-to-one twenty-minute sessions , one session per week, as well as parental support aimed at training for ten minutes per week. Each session consisted of four playful activities aimed at reciting and manipulating the rote counting through visual and auditory stimuli or involving motor skills. The results showed an improvement in counting and enumeration performance for the three children and an improvement in addition performance for one child. Results suggests a causal relationship between the automation of rote counting and the improvement of enumeration performance. This study, encouraging for speech-language therapy practice, demonstrated the interest of an early intervention focused on the training of the rote counting.
... All lessons were led by two persons in the research group. The present author was responsible for data collection and led all intervention lessons, for optimal reliability (Ebbels, 2017;Ebbels et al., 2019). A research assistant (Viktoria Åkerlund) ran films and slides during intervention lessons and assisted during data collection. ...
Full-text available
... We adopted a within-participant design with single baseline and control item (Ebbels, 2017) to address these questions written in the PICO format where applicable: ...
Enhanced Conversational Recast (ECR) is an input-based grammatical intervention approach developed from research on statistical learning. Recent research reported evidence demonstrating the efficacy of ECR on the learning of grammatically obligatory morphemes in English-speaking preschool children with developmental language disorder (DLD). This single-case experimental design study, which adopted a within-participant design with single baseline and control item, investigated the efficacy of ECR in promoting the learning of aspect markers in four Cantonese-speaking typically-developing preschool children. Two children demonstrated positive outcomes with the progressive aspect marker ‘ gan2’ given 12 ECR training sessions within a mean dosage of 288. One of these children demonstrated statistically significant gains in the percentage of correct use in the probes. The lack of positive outcomes in the other two children on the earlier developing aspect marker ‘ zo2’ and limitations of the study were discussed. With early evidence established in the typically developing children in this study, future research on Cantonese speaking children with DLD can be considered.
Full-text available
Purpose: Two disparate models drive American speech-language pathologists' views of evidence-based practice (EBP): the American Speech-Language-Hearing Association's (2004a, 2004b) and Dollaghan's (2007). These models discuss evidence derived from clinical practice but differ in the terms used, the definitions, and discussions of its role. These concepts, which we unify as clinical evidence, are an important part of EBP but lack consistent terminology and clear definitions in the literature. Our objective was to identify how clinical evidence is described in the field. Method: We conducted a scoping review to identify terms ascribed to clinical evidence and their descriptions. We searched the peer-reviewed, accessible, speech-language pathology intervention literature from 2005 to 2020. We extracted the terms and descriptions, from which three types of clinical evidence arose. We then used an open-coding framework to categorize positive and negative descriptions of clinical expertise and summarize the role of clinical evidence in decision making. Results: Seventy-eight articles included a description of clinical evidence. Across publications, a single term was used to describe disparate concepts, and the same concept was given different terms, yet the concepts that authors described clustered into three categories: clinical opinion, clinical expertise, and practice-based evidence, with each described as distinct from research evidence, and separate from the process of clinical decision making. Clinical opinion and clinical expertise were intrinsic to the clinician. Clinical opinion was insufficient and biased, whereas clinical expertise was a positive multidimensional construct. Practice-based evidence was extrinsic to the clinician-the local clinical data that clinicians generated. Good clinical decisions integrated multiple sources of evidence. 
Conclusions: These results outline a shared language for SLPs to discuss their clinical evidence with researchers, families, allied professionals, and each other. Clarification of the terminology, associated definitions, and the contributions of clinical evidence to good clinical decision-making informs EBP models in speech-language pathology. Supplemental material:
Full-text available
Parent–child interaction therapy (PCIT) is widely used by speech and language therapists to improve the interactions between children with delayed language development and their parents/carers. Despite favourable reports of the therapy from clinicians, little evidence of its effectiveness is available. We investigated the effects of PCIT as practised by clinicians within a clinical setting. Eighteen consecutive children referred for speech and language therapy because of their delayed language were entered in the study. A within-participants design was used, and the procedure was similar to that used clinically. Blind assessments were conducted twice before therapy to monitor change without therapy and once after completing PCIT. Significant changes in a parent rating scale, the children’s mean length of utterance and the ratio of time of child to parent speech were found after the therapy. No changes were detected prior to therapy. The similarity of the design to clinical practice and the use of several clinicians suggest that the findings can be generalized to other settings and clinicians. These findings are a first step in evaluating PCIT; we now need to show that parents can maintain their newly acquired skills in interaction, and that this benefits their children’s communication.
Full-text available
It has been suggested that difficulties with tense and agreement marking are a core feature of language impairment. Hence, studies are required that analyse the effectiveness of intervention in this area, including consideration of whether changes seen in therapy sessions generalize to spontaneous speech. This study assessed the effectiveness of therapy based around Shape Coding in developing the use of the regular past tense morpheme -ed in two school-aged children with language impairments. It also considered whether participants benefited from additional generalization therapy in order to start using target forms in their spontaneous speech. The former was assessed using a sentence completion task and the latter by a conversational task with blind assessors. One participant improved markedly in sentence completion but did not gain in the conversation task until after the generalization therapy. The other made more modest gains on the sentence completion task and seemed to generalize to the conversation task without recourse to the generalization therapy. Larger studies are required to confirm these interpretations and to determine whether they are applicable to the wider population of children with language impairments.
Full-text available
Background: There is a growing body of research that evaluates interventions for neuropsychological impairments using single-case experimental designs and diversity of designs and analyses employed. Aims: This paper has two goals: first, to increase awareness and understanding of the limitations of therapy study designs and statistical techniques and, second, to suggest some designs and statistical techniques likely to produce intervention studies that can inform both theories of therapy and service provision. Main Contribution & Conclusions: We recommend a single-case experimental design that incorporates the following features. First, there should be random allocation of stimuli to treated and control conditions with matching for baseline performance, using relatively large stimulus sets to increase confidence in the data. Second, prior to intervention, baseline testing should occur on at least two occasions. Simulations show that termination of the baseline phase should not be contingent on "stability." For intervention, a predetermined number of sessions are required (rather than performance-determined duration). Finally, treatment effects must be significantly better than expected by chance to be confident that the results reflect change greater than random variation. Appropriate statistical analysis is important: by-item statistical analysis methods are strongly recommended and a methodology is presented using WEighted STatistics (WEST).
Full-text available
Background: Currently, most research on the effective treatment of morphosyntax in children with specific language impairment (SLI) pertains to younger children. In the last two decades, several studies have provided evidence that intervention for older school-age children with SLI can be effective. These metalinguistic intervention approaches teach grammatical rules explicitly and use shapes and colours as two-dimensional visual support. Reading or writing activities form a substantial part of these interventions. However, some children with SLI are poor readers and might benefit more from an approach that is less dependent on literacy skills. Aims: To examine the effectiveness of a combined metalinguistic and multimodal approach in older school-age children with SLI. The intervention was adapted to suit poor readers and targeted the improvement of relative clause production, because relative clauses still pose difficulties for older children with SLI. Methods & procedures: Participants were 12 monolingual Dutch children with SLI (mean age 11;2). All children visited a special school for children with speech and language disorders in the Netherlands. A quasi-experimental multiple-baseline design was chosen to evaluate the effectiveness of the intervention. A set of tasks was constructed to test relative clause production and comprehension. Two balanced versions were alternated in order to suppress a possible learning effect from multiple presentations of the tasks. After 3 monthly baseline measurements, the children received individual treatment with a protocolled intervention programme twice a week during 5 weeks. The tests were repeated directly post-therapy and at a retention measurement 3 months later. During the intervention programme, the speech therapist delivering the treatment remained blind to the test results. Outcomes & results: No significant changes were found during the baseline measurements. 
However, measurement directly post-therapy showed that 5 h of intervention produced significant improvement on the relative clause production tasks, but not on the relative clause comprehension task. The gains were also maintained 3 months later. Conclusions & implications: The motor and tactile/kinesthetic dimensions of the 'MetaTaal' metalinguistic intervention approach are a valuable addition to the existing metalinguistic approaches. This study supports the evidence that grammatical skills in older school-age children with SLI can be remediated with direct intervention using a metalinguistic approach. The current tendency to diminish direct intervention for older children with SLI should be reconsidered.
Background: Children with developmental language disorder (DLD) frequently have difficulties with word learning and understanding vocabulary. For these children, this can significantly impact on social interactions, daily activities and academic progress. Although there is literature providing a rationale for targeting word learning in such children, there is little evidence for the effectiveness of specific interventions in this area for children with identified DLD. Aims: To establish whether direct one-to-one intervention for children with DLD over 9 years of age leads to improved abilities to identify, comprehend, define, and use nouns and verbs targeted in intervention as compared with non-targeted control items and whether or not the participants' rating of their own knowledge of the words changes with intervention. Methods & procedures: Twenty-five children and young people with language disorder (aged 9;4-16;1) participated in the study: 18 with DLD and seven with a language disorder associated with autism spectrum disorder (ASD). Two assessment versions of different levels were created: a higher-ability version (less frequent words) and a lower-ability version (more frequent words). Participants' speech and language therapists (SLTs) decided which level would be the most appropriate for each participant. Four tasks were carried out as part of the assessment and the scores were used to identify which words each participant worked on. Participants received one 30-min session per week one-to-one with their own SLT for 7 weeks, plus a 5-min revision session in between each main session. During each of the first five sessions, participants learned two new words; the two final sessions were spent revising the 10 words which had been targeted. Outcomes & results: Post-intervention assessment showed an increase in scores for both treated and control words. 
However, progress on treated words was significantly greater than on control words (d = 1.07), indicating effectiveness of intervention. The difference between progress on targeted and control words was found both for nouns (d = 1.29) and verbs (d = 0.64), but the effect size was larger for nouns. Whether or not the participants had an associated ASD did not affect the results. The children's self-rating of their knowledge of the targeted words was also significantly higher than for control words post-intervention. Conclusions & implications: The intervention delivered one-to-one by the participants' usual SLT was effective in teaching new vocabulary to older children with language disorders. This shows that older children with language disorders can make progress with direct one-to-one intervention focused on vocabulary.
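The effect sizes quoted in these abstracts are Cohen's d values, comparing progress on treated versus control items using a pooled standard deviation. As a minimal sketch of how such a value is computed (the gain scores below are invented for illustration, not data from any of the studies described here):

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Invented per-participant gain scores (post minus pre), one value per child.
treated_gains = [4, 5, 3, 6, 4, 5, 2, 5]
control_gains = [1, 2, 0, 3, 1, 2, 1, 2]

d = cohens_d(treated_gains, control_gains)
print(f"d = {d:.2f}")
```

By convention, d around 0.2 is considered small, 0.5 medium and 0.8 large, which is why values above 1.0 in these studies are described as large and clinically significant effects.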
Background: Evidence of the effectiveness of therapy for older children with (developmental) language disorder (DLD), and particularly those with receptive language impairments, is very limited. The few existing studies have focused on particular target areas, but none has looked at a whole area of a service. Aims: To establish whether, for students with (developmental) language disorder attending a specialist school, 1:1 intervention with an SLT during one school term improves performance on targeted areas, compared with untreated control areas. Also, to investigate whether gender, receptive language status, autism spectrum disorder (ASD) status, or educational Key Stage affected their response to this intervention. Methods & procedures: Seventy-two students (aged 9-17 years, 88% of whom had receptive language impairments) and all speech and language therapists (SLTs) in our specialist school for children with language disorder (most of whom have DLD) participated in this study over one school term. During this term, the SLTs devised pre- and post-therapy measures for every student for each target they planned to treat 1:1. In addition, for each target area, a control measure was devised. The targets covered a wide range of speech, language and communication areas, both receptive and expressive. Post-therapy tests were administered 'blind'. Outcomes & results: During the term, SLTs and students worked 1:1 on 120 targets, the majority in the areas of expressive and receptive language. Targets and controls did not differ pre-therapy. Significant progress was seen both on targets (d = 1.33) and controls (d = 0.36), but the targeted areas improved significantly more than the controls with a large and clinically significant effect size (d = 1.06). There was no effect of language area targeted (targets improved more than their controls for all areas). 
Participants with versus those without receptive language difficulties, co-occurring ASD diagnosis or participants in different educational Key Stages did not differ significantly in terms of the progress they made on target areas. Conclusions & implications: Direct 1:1 intervention with an SLT can be effective for all areas of language for older children with (D)LD, regardless of their gender, receptive language or ASD status, or age. This adds to the relatively limited evidence base regarding the effectiveness of direct SLT intervention for school-aged children with (D)LD and for children with receptive language impairments. If direct 1:1 intervention can be effective with this hard-to-treat group, it may well also be effective with younger children with (D)LD. Thus, direct SLT services should be available for school-aged children with (D)LD, including older children and adolescents with pervasive difficulties.
Background: Clinicians globally recognize as exceptionally challenging the development of effective intervention practices for bi- or multilingual children with specific language impairment (SLI). Therapy in both or all of an impaired child's languages is rarely possible. An alternative is to develop treatment protocols that facilitate the transfer of therapy effects from a treated language to an untreated language. Aims: To explore whether cognates, words that share meaning and phonological features across languages, could be used to boost lexical retrieval in the context of multilingual SLI. This is dependent on exploiting the phonological information in the one, trained language as a mechanism for (phonological) language transfer to the other, untrained languages. Methods & procedures: The participant is an 8.5-year-old girl diagnosed with SLI who showed a severe naming deficit in her three spoken languages (Bulgarian, English and Greek). She received training on cognates (n = 20) using a picture-based naming task in English only, three times a week, over a 4-week period for 20 min each time. Phonological-based naming therapy was carried out using form-based strategies. Outcomes & results: There was a significant improvement during therapy and immediately after intervention on cognate performance in English which was maintained 1 month after intervention. Cognate production in Bulgarian and Greek also improved during all stages of the intervention. Improvement in the non-treated languages was slightly more than half of the improvement recorded in English. The findings reflected some degree of cross-linguistic transfer effects. Conclusions & implications: Cross-linguistic transfer effects were evident during therapy and after therapy had finished and the effects were maintained 1 month post-treatment. Both the native language (Bulgarian) and the dominant language (Greek) benefitted equally from the treatment of cognates in English. 
Generalization to non-treatment words was evident, predominantly for English. The results suggest that cognates can indeed be used successfully as a word-finding difficulty (WFD) intervention strategy for multilingual children with SLI, with lasting effects.
The success of efforts to reduce inappropriate use of medical tests and interventions may be limited by our tendency to overestimate the effect of our actions. Efforts to promote more rational medical decision making will need to address this illusion of control.
Word finding difficulties are often seen in children with language difficulties. Their problem is readily observed and has led to investigations of its nature and encouraged attempts at intervention. Semantic errors in their naming suggest that their knowledge of items is poorly developed and that therapies to strengthen it may be effective. Twelve children between 7 and 11 years of age were offered 3 hours of semantic therapy in two 15-minute sessions per week for 6 weeks. The children had severe and complex speech, language and communication needs and all were in the bottom 5% for their age on a test of word finding. Two categories of items were treated. Each category was divided into sets of items that were directly treated, and items which appeared during therapy but were not specifically targeted. Categories and sets of items were counterbalanced across children. The children were blind assessed on naming the items before and after therapy and at a maintenance assessment 6 weeks after the treatment ceased. The children improved significantly on the treated items and on untreated items from the same category but not on items from the untreated category. Improvement was maintained at the maintenance assessment. Although results were significant, only medium effect sizes were obtained. These results add to the evidence that semantic therapy can help children with word finding difficulties. In assessing their clinical significance, the severity of the children’s communication problems should be taken into account.
Michael O'Donnell: BMJ Books, £14.95, pp 208, ISBN 0 7279 1204 6. This is a light-hearted collection of quips, anecdotes, and aphorisms that slyly feeds the reader some serious comments on the state of medical practice and clinical research. To give a flavour of the book, here is a small selection that I found provocative. The entries are presented under somewhat arbitrary categories. Under "Faith," which Dr O'Donnell defines as a "valuable ally in achieving a 'cure' and a dangerous enemy in assessing it," he offers the following from Sir Peter Medawar: "Exaggerated claims for the efficacy of a medicament are very seldom the consequence of any intention to deceive; they are usually the outcome of a kindly conspiracy in which everybody has the very best intentions. The patient wants to get well, his physician wants to have made him better, and the pharmaceutical company would have liked to have put it into the physician's power to have made him so. The controlled clinical trial is an attempt to avoid being taken in by this conspiracy of good will." (From Advice to a Young Scientist, published in 1979.) The clinical trial is a recurrent theme. Thus, from William Silverman's Human Experimentation: a Guided Step into the Unknown: "Ethical arguments raised when patients are to be allocated to compared treatments take one of two contradictory forms. The first contends that the group receiving standard treatment is sacrificed because they are denied the benefit of a favourable new therapy. The second expresses concern that patients allotted to an untested innovation are exposed to an unwarranted risk." In front of me, as I write, is the 18 September issue of the New England Journal of Medicine, which damns ongoing clinical trials in third world countries in which zidovudine, intended to avert fetal transmission of HIV, is compared with a placebo. 
The trials are sponsored by the World Health Organisation, the Centers for Disease Control, Atlanta, and the National Institutes of Health, at each of which the design of the trials was carefully deliberated. Clearly, there are difficult, possibly insoluble ethical issues here, and I wonder whether it is ever possible to carry out clinical trials free of ethical controversy. When the first trials of polio vaccine were carried out in the United States there was not sufficient vaccine for children of every age, so the vaccine was compared with a placebo. Nevertheless, Westbrook Pegler, a widely syndicated columnist, condemned the trial for depriving children of the vaccine.