The Moderating Effect of Adherence-Promoting
Interventions With Clients on Evidence-Based
Practices for Children and Adolescents
With Mental Health Problems
Craig Schwalbe and Robin Gearing
Poor adherence of children and adolescents to evidence-based psychosocial interventions
remains a fundamental impediment to treatment effectiveness. To maintain client adher-
ence, researchers and clinicians have employed a number of adherence-promoting strate-
gies, from telephone calls and letters to providing transportation costs and child care to
motivational enhancement therapies. However, the influence of adherence promoters on
intervention outcomes has not been reported. This study examined the moderating effect
of adherence-promoting strategies in a survey and meta-analysis of randomized clinical
trials of cognitive behavioral treatments, interpersonal therapy, and psycho-education for
children and adolescents with mental health problems (k = 33). Results indicated the type
and intensity of adherence promoters moderated study effect sizes according to client
characteristics (age, gender, diagnosis). Preliminary findings suggest that males had higher
effect sizes when more intensive adherence-promoting efforts were employed. Adherence-
promoting efforts were associated with lower effect sizes for youths who were diagnosed
with externalizing disorders. Results of this study suggest directions for future research to
clarify clinical guidelines to maximize retention in evidence-based psychotherapy.
Evidence for the efficacy of cognitive-behavioral, interpersonal, and psycho-educational interventions for child and adolescent mental health disorders is growing. Cognitive-behavioral interventions in particular have documented small, moderate, and large effects dependent on their problem focus and modality (Bennett & Gibbons, 2000; Bradley & Mandell, 2005; Fabiano et al., 2009; Landenberger & Lipsey, 2005; Weisz, McCarty, & Valeri, 2006; Weisz, Weiss, Alicke, & Klotz, 1987; Weisz, Weiss, Han, Granger, & Morton, 1995). Also, an increasing number of trials document the efficacy of such approaches as interpersonal psychotherapy (IPT) and psycho-educational therapy (PE; e.g., Bolton et al., 2007; King et al., 2006). However, despite annual investments of millions of dollars to develop and disseminate these interventions, poor client adherence remains a fundamental impediment to treatment effectiveness (Armbruster & Fallon, 1994; Armbruster & Kazdin, 1994; Lefforge, Donohue, & Strada, 2007; Nock & Kazdin, 2005; Pekarik, 1985). Over one-third of children and adolescents drop out or terminate prematurely from mental health psychosocial treatment (Clarkin & Levy, 2004; Garfield, 1994; Kazdin, 2000, 2008; Kazdin & Mazurick, 1994; Olfson et al., 2009; Swift, Callahan, & Levine, 2009; Weersing & Weisz, 2002; Wierzbicki & Pekarik, 1993), and homework completion rates tend to diminish over the course of treatments (Gaynor, Lawrence, & Nelson-Gray, 2006). Poor attendance and declining participation in treatment protocols undermine intervention effects (Armbruster & Kazdin, 1994; Lefforge et al., 2007; Nock & Kazdin, 2005; Pekarik, 1985).
Adherence has long been recognized as an essential ingredient
for intervention success (Nock & Ferriter, 2005), and adher-
ence-promoting interventions have been developed to support
client adherence (Gearing, Schwalbe, Dweck, & Berkowitz, in
press). Adherence promoter strategies include between-session
communication contacts (e.g., telephone calls, letters, and
texting), motivational enhancement therapies (e.g., pretreatment
motivational interviewing), ongoing therapeutic processes (e.g.,
problem solving, direct encouragement), and concrete strategies
(e.g., parking, child care, and food), among others. Yet, inter-
vention developers tend to overlook their own active strategies
for adherence promotion in their reporting of research results
and have rarely reported adherence-promoting strategies in their
intervention manuals. Perhaps for these reasons, the effects of
adherence-promoting interventions have not been incorporated
into the search for clinically meaningful moderators of treat-
ment effects using meta-analyses.
This study introduced an innovative approach to moderator analysis in meta-analysis. The study systematically identified adherence-promoting interventions employed in randomized clinical trials (RCT) of evidence-based practice interventions for childhood and adolescent mental health problems and incorporated these data into a meta-analysis of evidence-based interventions for childhood and adolescent mental health problems. The goals of this study are to examine the moderating influence of adherence promotion on overall intervention effects and to identify client characteristics and intervention characteristics that moderate the effects of adherence promotion.

Correspondence concerning this article should be addressed to Craig Schwalbe, School of Social Work, Columbia University, 1255 Amsterdam Ave., New York, NY 10027. Electronic mail may be sent to

American Journal of Orthopsychiatry, 2012, Vol. 82, No. 1, 146–155. © 2012 American Orthopsychiatric Association

Adherence to Evidence-Based Interventions
Adherence refers to the client’s postenrollment participation
in psychotherapy protocols (Gearing et al., in press). While
research occasionally examines adherence through homework
completion or through the depth of engagement in treatment
(Gaynor et al., 2006; Nock & Kazdin, 2005), adherence is most
often operationalized as retention in treatment, session atten-
dance, and treatment completion rates. In general, mental health
psychosocial intervention studies report dropout rates ranging
from as low as 25% to as high as 75% for children and adoles-
cents (Kazdin, Mazurick, & Siegel, 1994; Lai, Pang, Wong,
Lum, & Lo, 1998; Pekarik & Stephenson, 1988; Pina, Silverman,
Weems, Kurtines, & Goldman, 2003; Wierzbicki & Pekarik,
1993). And despite their rigor, published RCTs of well-estab-
lished evidence-based practices (EBP) report dropout rates as
high as 46% (Pina et al., 2003). While analytic methods involv-
ing intent-to-treat analyses have been developed to statistically
control for treatment dropout on ultimate outcomes, such meth-
ods do not provide direction to clinicians about how to maxi-
mize attendance and client retention.
Research suggests that adherence as indicated by treatment
retention may be influenced by a number of factors (Daley &
Zuckoff, 1999; Geffken, Keeley, Kellison, Storch, & Rodrigue,
2006). For instance, studies have demonstrated that parental
attitudes and expectations toward treatment influence adherence
(Richardson, 2001; Taylor & Stansfield, 1984). Severity of psy-
chopathology, in both children and parents, may exert an effect
on adherence, but the direction of this influence is mixed. Where
some research has found that severe psychopathology reduced
adherence to treatment (Killaspy, Banerjee, King, & Lloyd,
2000; King, Hovey, Brand, Wilson, & Ghaziuddin, 1997), other
investigations have found that more severe youth psychiatric
symptoms increased adherence (Rotheram-Borus et al., 1999).
Finally, common psychosocial conditions and circumstances can
create barriers to adherence, including financial limitations,
problems with access, child care needs, transportation difficul-
ties, and competing time demands. Interestingly, research with
children who had received recommendations for treatment fol-
lowing a psychological evaluation found that the number of
barriers, rather than the presence of any single barrier,
decreased adherence (MacNaughton & Rodrigue, 2001).
Across the general health care sector, adherence promoters
have been developed to overcome adherence barriers. In general,
adherence promotion strategies may fall into one of three categories: outreach, concrete, and motivational
enhancement (Gearing et al., in press). Outreach promoters are
contacts that take place outside of formal intervention sessions
(Grote, Zuckoff, Swartz, Bledsoe, & Geibel, 2007; Kazdin,
Holland, & Crowley, 1997; Shivack & Sullivan, 1989). Often,
they are communication reminders, either traditional outreach
contacts (e.g., telephone calls, letters) or, to a lesser extent,
technological outreach contacts (e.g., texting and emails) that
provide information about the intervention. Concrete promoters
are direct tangible goods or services that facilitate participation
or that address recognizable obstacles. Examples include paying
travel expenses, providing food or child care, and scheduling
evening appointments to increase client convenience (Burns,
Cortell, & Wagner, 2008; Miranda, Azocar, Organista, Dwyer,
& Areane, 2003). Finally, motivational enhancement interven-
tions (MET) influence clients’ positive expectations for treat-
ment (Bonner & Everett, 1986; Katz et al., 2004; McKay et al.,
2004; Miller & Rollnick, 2002; Nock & Kazdin, 2005; Steinberg,
Ziedonis, Krejci, & Brandon, 2004). Examples include pre-
treatment motivational enhancement interviewing, ongoing ther-
apeutic strategies such as problem solving and encouragement,
and the use of financial or other tangible incentives to encourage
treatment participation. Whereas the effect of outreach and con-
crete promoters traditionally emphasizes retention and session
attendance, MET address both attendance and treatment partic-
ipation (e.g., completion of homework).
The literature provides limited direction about the use of
promoters to increase adherence in psychosocial interventions.
Haynes, Ackloo, Sahota, McDonald, and Yao (2008) examined
the effectiveness of adherence-promoting interventions for
patient adherence to medication. Their findings suggested that
multipronged approaches emphasizing frequent contact through
letters, phone calls, and other forms of contact increased adher-
ence to treatment. Bosch-Capblanch, Abba, Prictor, and Garner
(2007) found that the use of adherence contracts between health
care providers and patients increased early session attendance,
but that the effects of these interventions waned over time.
These reviews indicated the need for sustained efforts using mul-
tiple strategies to promote adherence throughout the course of
treatment. However, neither of these studies was specific to the
problem of adolescent adherence, nor did they answer questions
of critical clinical concern to practitioners in the field. Specifi-
cally, with what intensity should specific promoters (i.e., out-
reach, concrete, motivational) be applied to a given population
(e.g., gender, diagnosis, age, and race) under planned interven-
tion characteristics (e.g., intervention modality, number of ses-
sions)? When the moderating effects of these child and
intervention characteristics on adherence promotion and on
intervention effectiveness are understood, then practitioners
who employ evidence-based interventions in practice will be
able to systematically individualize their intervention efforts to
maximize and sustain client adherence.
Moderators and Mediators in Meta-Analysis
Meta-analysis is increasingly employed to answer questions
about the moderating effects of client, intervention, and study
characteristics on intervention outcomes. Beginning with the
landmark meta-analyses of Weisz et al. (1987, 1995), research
has shown that psychosocial interventions are generally effective
for childhood and adolescent mental health conditions. These
meta-analyses establish benchmarks of small to moderate effects
using Cohen’s (1988) rubric for interpreting effect sizes. They
showed that effect sizes for behavioral interventions were higher
than those of nonbehavioral interventions; effects for female
adolescents were larger than for male adolescents, but that the
effects for children did not vary by gender; overall, the effects
were larger for adolescents than for children and that the effects
were larger when paraprofessionals were providing treatment
for externalizing disorders and when professionals were provid-
ing treatment for internalizing disorders. Subsequent meta-anal-
yses have sought to identify intervention effects of cognitive-
behavioral interventions for specific mental health problems.
For instance, small to moderate effects on problems such as
depression (Chu & Harrison, 2007; Weisz et al., 2006) and anti-
social behaviors (Bennett & Gibbons, 2000; Landenberger &
Lipsey, 2005) have been reported. Larger effects have been
reported for anxiety disorders (Chu & Harrison, 2007; In-Albon
& Schneider, 2007; Ishikawa, Okajima, Matsuoka, & Sakano,
2007; Prins & Ollendick, 2003), behavioral treatment for atten-
tion deficit hyperactive disorders (ADHD; Fabiano et al., 2009),
self-regulation treatment for ADHD (Reid, 2005), and com-
bined treatment for ADHD (Majewicz-Hefley & Carlson, 2007).
However, moderator analysis is constrained by the availability and quality of data included in research reports. While published reports of intervention trials are increasingly standardized in accordance with the consolidated standards of reporting trials statement (CONSORT; Moher, Schulz, & Altman, 2001), unstandardized reporting remains prevalent. For
instance, Fabiano et al.’s (2009) analysis of behavioral treat-
ments for ADHD noted this limitation in their study pool.
Some studies have been able to include moderator analysis of
intervention processes with limited success. Landenberger and
Lipsey’s (2005) meta-analysis of cognitive-behavioral treatment
for antisocial behaviors stands out in this regard. This study
showed that interventions including interpersonal problem solv-
ing and anger control led to lower rates of repeated delinquency
among delinquent adolescents. Despite the clear clinical implica-
tions of such findings, the search for clinically meaningful mod-
erators using meta-analysis faces the limitations of the available
data. In particular, adherence-promoting strategies are unre-
ported in most published research reports.
This study sought to extend the utility of meta-analysis to
identify adherence-related moderators of treatment effectiveness
for children and adolescents with mental health problems. In
this study, a comprehensive literature search for peer-reviewed
RCT encompassing three evidence-based interventions (i.e., cog-
nitive-behavioral therapy [CBT], IPT, PE) was supplemented by
a survey of adherence-promoting interventions completed by the
principal authors for each report. Data from the survey were
used in conjunction with a traditional meta-analysis to answer
two research questions: (a) Do adherence-promoting interven-
tions moderate the effect of evidence-based practices on out-
comes? and (b) Does the moderating effect of adherence-
promoting interventions interact with child characteristics?
Method

Eligible research reports were identified through an electronic
search of MEDLINE and PsycINFO databases that met the fol-
lowing inclusion criteria: (a) employed an experimental RCT
design; (b) investigated one of the three EBP interventions for
diagnosed mental health conditions, issues, or problems (CBT,
IPT, PE); (c) studied children and adolescents (18 years of age
and under); (d) reported sufficient information to calculate
pretest–posttest effect sizes; and (e) published findings between
January 2000 and April 2008 in an English language peer-
reviewed journal. Primary prevention studies were excluded.
The time frame was selected to facilitate recall for survey
completion as discussed later. Search keywords included the
following: cognitive-behavioral therapy; cognitive behavior ther-
apy; cognitive therapy; CBT; interpersonal psychotherapy; IPT;
psychoeducation; psychoeducational; PE; family psychoeduca-
tion; multiple family group; and family-focused treatment.
Authors of eligible research reports were invited to complete
a web-based survey about their use of adherence-promoting
strategies in their published research report. An invitation letter
with the online survey link was sent to the contact author of eli-
gible research reports (usually the first author). If the electronic
contact information was incorrect, further attempts were made
to search online for alternative contact information. In studies
with more than one author, contact information was sought for
the next author(s), if no contact information could be found for
the first author. The total eligible sample was 68. Three contacts
were attempted. All potential respondents who entered the
online survey portal were eligible to enter a drawing for one of
the five $50 bookstore gift certificates. Thirty-four participants
completed the study protocols.
Research reports were coded by the authors and two graduate
research assistants (GRA) who were trained and supervised by
the authors. Following training, reliability tests were conducted
twice, prior to formal coding and at the midpoint of the
coding period. Interrater agreement in both tests exceeded 90%.
Coding difficulties and challenges were resolved in weekly
face-to-face supervision of the GRAs with the authors. At the
conclusion of the coding period, the authors double checked
codes for all effect sizes. The following data were extracted from each research report.
Sample and study characteristics. These included sample size, average age of study participants, gender (proportion male), and primary diagnosis. Outcome measures were classified into five categories: mental health (e.g., diagnostic
measures and symptom scales), functioning (e.g., global func-
tioning and social functioning), cognitive (e.g., schema and
attributions), parent related (e.g., parenting practices), and child
health (e.g., global health measures). Attendance rates were
coded directly from research reports when provided or were
calculated based on the average number of sessions attended
divided by the total number of sessions offered.
Effect sizes were calculated by taking the difference of intervention pretest and posttest scores divided by the standard deviation of the pretest scores. In cases in which pretest or posttest scores were not presented, effect sizes were calculated from F statistics and p-values. Effect sizes were averaged across all CBT, IPT, and PE treatment conditions and
outcomes to yield a single effect size for each study. Effect sizes
were not calculated for control conditions, nor were they calcu-
lated for drug-only experimental conditions, because these
groups would not have received the adherence-promoting inter-
ventions that were the focus of this meta-analysis. Effect sizes
were aggregated across active treatment conditions, because sur-
vey respondents were not asked to differentiate survey responses
by intervention condition.
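The effect size computation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the one-group conversion from an F statistic (d = √(F/n), via F = t²) is a simplifying assumption.

```python
import math

def effect_size_from_scores(pretest, posttest):
    """Standardized mean gain: (M_post - M_pre) / SD_pre."""
    n = len(pretest)
    m_pre = sum(pretest) / n
    m_post = sum(posttest) / len(posttest)
    # Sample standard deviation of the pretest scores
    sd_pre = math.sqrt(sum((x - m_pre) ** 2 for x in pretest) / (n - 1))
    return (m_post - m_pre) / sd_pre

def effect_size_from_f(f_stat, n):
    """Fallback when only an F statistic is reported; assumes a
    one-group pre-post contrast where F = t^2 and d = t / sqrt(n)."""
    return math.sqrt(f_stat / n)

def study_effect_size(condition_ds):
    """Average effect sizes across all CBT/IPT/PE treatment conditions
    and outcomes to yield a single d per study."""
    return sum(condition_ds) / len(condition_ds)
```

For example, pretest scores (10, 12, 14) and posttest scores (13, 15, 17) yield d = 1.5.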
An 85-question survey was administered to respondents
through SurveyMonkey (http://www.surveymk.com), an online
survey portal. The survey requested detailed information about
the use of outreach adherence promoters (i.e., telephone con-
tacts, mailed contacts, and electronic contacts), concrete adher-
ence promoters (i.e., child care, meals, scheduling convenience
and travel assistance), and motivational adherence promoters
(i.e., direct incentives, pretreatment motivational enhancement,
and ongoing intervention strategies). When respondents endorsed any adherence promoters, survey prompts requested
further information including the frequency with which the
adherence promoter was employed, staff time involved in the
promoter, staff qualifications, and time in training and supervi-
sion. A subsection of the survey is included in the Appendix; the
full survey is available from the authors.
From the survey, two summary measures of adherence-pro-
moting intensity were derived: total promoters and promoter
frequency. Total promoters is the count of the number of types
of adherence-promoting interventions employed during the
course of a study ranging from 0 to 6. Promoter frequency is the
average number of adherence promoters employed per session.
For each promoter endorsed by survey respondents, respon-
dents were asked to identify in what proportion of sessions the
promoter was employed (i.e., 25% or less, 50% or less, 75% or
less, or 100%). The proportions across all adherence promoters
were summed to provide an estimate of frequency in which the
promoters were employed. Promoter frequency ranged from
0.25 to 6, in which the top score indicates that, on average, six
promoters were used during each session and the bottom score
indicates that 0.25 promoters were used during each session.
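The two intensity summaries can be illustrated with a short sketch; the promoter names and the response values below are hypothetical examples, not data from the survey.

```python
# For each endorsed promoter, respondents chose the proportion of
# sessions in which it was used: 0.25, 0.50, 0.75, or 1.0.
survey_response = {
    "phone": 1.0,      # used in every session
    "mail": 0.25,      # used in 25% of sessions or less
    "concrete": 0.50,  # used in 50% of sessions or less
}

# Total promoters: count of distinct promoter types employed (0-6).
total_promoters = len(survey_response)

# Promoter frequency: summed proportions, i.e., the average number
# of promoters employed per session (possible range 0.25-6).
promoter_frequency = sum(survey_response.values())

print(total_promoters)     # 3
print(promoter_frequency)  # 1.75
```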
Hunter and Schmidt (2004) random effects models were
employed to calculate inverse-variance weighted effect sizes, to
correct standard errors for sampling error variance, and to
calculate credibility intervals. Heterogeneity statistics indicated
high levels of variability (k = 68, Q = 635.02, p < .001, I² = 0.89), which justified the random effects models but also
suggested the presence of potential outliers. Outliers were
detected using the sample-adjusted meta-analytic deviancy sta-
tistic (SAMD; Winfred, Winston, & Huffcutt, 2001). The
SAMD statistic, calculated for each research report, is a mea-
sure of change in average effect size adjusted for sample size
when the research report is excluded from the analysis.
SAMD statistics follow an approximate t distribution. Follow-
ing Winfred et al., scree plots were used to identify potential
outliers. The moderating effect of sample characteristics, study
characteristics, and adherence promoters was assessed by cal-
culating effect sizes for different levels of the potential moder-
ators. Hunter and Schmidt recommended data subsetting over
multivariate approaches to moderator analysis to avoid capi-
talizing on chance in meta-analyses with low statistical power.
Categorical variables were classified into two categories (e.g.,
primary diagnosis: internalizing disorder or externalizing disor-
der), whereas median splits were used for continuous vari-
ables. In this approach, two tests indicate the presence of
moderation: effect size differences between subgroups and
reductions in unexplained variance (I²).
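The aggregation and subsetting steps above can be sketched as follows. This is a simplified illustration: full Hunter and Schmidt models apply additional artifact corrections, and the example studies are hypothetical. The I² computation, however, reproduces the overall heterogeneity reported in this study (Q = 635.02 over k = 68 gives I² of about 0.89).

```python
def weighted_mean_d(effect_sizes, sample_sizes):
    """Sample-size-weighted mean effect size across studies."""
    total_n = sum(sample_sizes)
    return sum(d * n for d, n in zip(effect_sizes, sample_sizes)) / total_n

def i_squared(q_stat, k):
    """I^2 = (Q - df) / Q: share of variability beyond sampling error."""
    return max(0.0, (q_stat - (k - 1)) / q_stat)

def median_split(studies, key):
    """Split studies at the median of a continuous moderator, as in
    the data-subsetting approach to moderator analysis."""
    values = sorted(s[key] for s in studies)
    mid = values[len(values) // 2]  # upper median for simplicity
    low = [s for s in studies if s[key] < mid]
    high = [s for s in studies if s[key] >= mid]
    return low, high

# Reproduces the reported overall heterogeneity: Q = 635.02, k = 68
print(round(i_squared(635.02, 68), 2))  # 0.89

# Hypothetical studies: effect size d, sample size n, proportion male
studies = [
    {"d": 1.2, "n": 40, "pct_male": 0.30},
    {"d": 0.9, "n": 60, "pct_male": 0.55},
    {"d": 0.7, "n": 80, "pct_male": 0.70},
    {"d": 1.4, "n": 20, "pct_male": 0.20},
]
low, high = median_split(studies, "pct_male")
d_low = weighted_mean_d([s["d"] for s in low], [s["n"] for s in low])
d_high = weighted_mean_d([s["d"] for s in high], [s["n"] for s in high])
print(round(d_low, 3), round(d_high, 3))  # 1.267 0.786
```

The same formula also recovers the subsample value reported below (Q = 93.8, k = 33 gives I² of about 0.66), which is why the two heterogeneity tests, subgroup effect size differences and reductions in I², can be computed directly from the tabled statistics.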
Results

The search yielded 68 eligible studies. Of this number, 34
respondents (50%) completed the surveys. The survey subsam-
ple (k = 34) did not differ from the overall sample in sample
demographics, diagnostic characteristics, or intervention charac-
teristics. As shown in Table 1, about 70% of the survey subsam-
ple studies targeted internalizing disorders (affective disorders
and anxiety disorders), whereas approximately a quarter of
these studies targeted externalizing disorders (conduct disorders,
oppositional defiant disorders, attention deficit/hyperactivity
disorders, and substance use disorders). Nearly all studies tested
CBT interventions (91%), and a majority involved group inter-
ventions (65%). The number of outcomes measured in each
study ranged from 1 to 15 (median = 5). The number of mental
health outcomes ranged from 0 to 11 (median = 3), and functioning outcomes ranged from 0 to 7 (median = 1). The average effect size for all outcomes was large (d = 1.22, k = 34) but with unacceptably high heterogeneity (χ² = 382.8, I² = 0.91).
Scree plots of SAMD statistics (SAMD > 4) suggested the
presence of one outlier that was subsequently omitted from fur-
ther analysis. Over the remaining 33 studies, the average effect
Table 1. Characteristics of Studies [cell values not recoverable from source; columns: full sample (k = 68) and survey subsample (k = 34); rows: sample demographics (M, SD), primary diagnosis (k, %), intervention characteristics (n, %)]. Note. CBT = cognitive-behavioral therapy; IPT = interpersonal psychotherapy; PE = psycho-educational.
size was d = 1.03 (k = 33) with much reduced heterogeneity (χ² = 93.8, I² = 0.66).
Table 2 shows effect sizes for the full sample (k = 65), the
survey subsample (k = 33), and for subgroups of potential
moderating variables. Studies with higher proportions of female
clients and adolescents had higher effect sizes than studies with
larger proportions of males and younger clients. Effect sizes for
interventions targeting internalizing disorders were higher than those targeting externalizing disorders. The association between attendance rate and effect size was also examined. As
only 13 studies reported sufficient information to calculate aver-
age attendance rates (M = 0.81, SD = 0.16), further analysis
was not conducted with this variable. Other moderating variables examined included CBT studies (k = 27, d = 0.93; I² = 0.46) versus non-CBT studies (k = 6, d = 1.22; I² = 0.87); large sample sizes (n > 55; k = 17, d = 0.94; I² = 0.75) versus small sample sizes (k = 16, d = 1.10; I² = 0.49); and fewer (< 5) outcome measures (k = 16, d = 1.22; I² = 0.75) versus more (≥ 5) outcome measures (k = 17, d = 0.81; I² = 0.19).
The most common adherence promoter reported by survey
respondents was phone promoters (k = 20, 61%), followed by
concrete promoters (k = 19, 58%). Use of ongoing promoters
was reported by 14 respondents (47%) and MET were reported
by 13 (42%). Least often used promoters included incentives
(k = 7, 21%), mail promoters (k = 6, 18%), and technology-
assisted promoters (k = 1, 3%). On average, respondents
reported using nearly three types of promoters (M = 2.9,
SD = 4.3) and reported using more than one promoter on aver-
age per session (M = 1.3, SD = 3.3). The majority of respon-
dents reported that promoters targeted both parents and
children (19 of 33, 57.6%) with the remaining studies targeting
either youths (k = 7, 21%) or parents (k = 7, 21%). As
expected, promoters targeting youth only were limited to studies
with older children (> 11 years old), whereas promoters target-
ing parents only were limited to studies with younger children
(< 11 years old).
Table 3 shows the associations between adherence promoters
and average effect sizes. Promoter intensity measures (number
of promoters reported and frequency of adherence promoters)
were dichotomized at their median values into low and high cat-
egories, and average effect sizes were calculated for each group.
Average effect sizes were also calculated for the six individual
adherence promoters. Across these analyses, average effect sizes
for most adherence promoters approximated the overall average
effect size (d = 1.03) and heterogeneity remained virtually
unchanged. The exception was mailed promoters: studies that employed them had an average effect size 41% lower than that of studies that did not (d = 0.64 vs. 1.09). Despite this difference, heterogeneity did not decrease (I² = 0.66) compared with the heterogeneity base rate.
The next analysis compared effect sizes for the adherence-pro-
moting interventions across client characteristics (see Table 4).
The gender distribution of the samples was dichotomized using
a median split (median = 49% male) into predominantly male
samples (M = 71% male) and predominantly female samples
(M = 73% female). Compared with predominantly female sam-
ples, predominantly male samples appeared to benefit more
from active adherence-promoting efforts. Studies with predominantly male samples had higher effect sizes when they employed more adherence promoters (31% higher; d = 0.89 vs. 0.68; I² = 0.25), when they employed adherence promoters more frequently (28% higher; d = 0.91 vs. 0.71; I² = 0.26), and when they employed telephone promoters (19% higher; d = 0.87 vs. 0.73; I² = 0.36). Both males and females benefited from ongoing therapeutic promoters, although the benefit to males (82% higher; d = 1.00 vs. 0.55; I² = 0.41) was stronger than that for females (28% higher; d = 1.42 vs. 1.11; I² = 0.52). In each
comparison, reductions in heterogeneity accompanied by effect
size difference provided evidence for the interaction of gender
with adherence promotion.
Studies were also dichotomized at the median age (11.2 years
old) to describe predominantly older (M = 14.7) and predominantly younger (M = 9.9) samples. Studies reporting the effects of MET promoters with younger youths had higher effect sizes (34% higher; d = 0.82 vs. 0.61; I² = 0.35), as did studies that reported employing ongoing therapeutic promoters (38% higher; d = 0.83 vs. 0.60; I² = 0.59).

Table 2. Effect Sizes for Different Subsamples and Outcomes [cell values not recoverable from source]. Note. Male = samples with more than 49% male participants (M = 71%); Female = samples with 49% or more female participants (M = 73%); Young = samples with average age < 11.3 (M = 9.9); Old = samples with average age > 11.3 (M = 14.7).

Table 3. Effect Sizes by Adherence Promoter Intensity and by Individual Adherence Promoters [rows: number of promoters, frequency of promoters, and individual adherence promoters; cell values not recoverable from source]. Note. High total number of promoters ≥ 3 (Mlow = 1.6, Mhigh = 3.9); high frequency of promoters > 1.14 (Mhigh = 2.1; Mlow not recoverable). MET = motivational enhancement interventions.

Studies with
predominantly older youths that reported using concrete promoters also had larger effect sizes (28% higher; d = 1.40 vs. 1.09; I² = 0.23). Adherence promoters were not associated with
lower effect sizes for either younger or older children.
Adherence promoters were associated with both higher and
lower effect sizes for studies of internalizing and externalizing
disorders, although reductions in heterogeneity were uneven,
suggesting less robust findings. Studies of internalizing disorders
had higher effect sizes when higher numbers of promoters were employed (20% higher; d = 1.16 vs. 0.97; I² = 0.54) and when concrete promoters were used (41% higher; d = 1.20 vs. 0.85; I² = 0.38) but had lower effect sizes when MET promoters were
reported (30% lower; d = 0.89 vs. 1.27; I² = 0.07). Studies of externalizing disorders had higher effect sizes with MET promoters (91% higher; d = 1.09 vs. 0.57; I² = 0.70) and ongoing therapeutic promoters (62% higher; d = 1.18 vs. 0.73; I² = 0.00). However, these same studies reported lower effect sizes when the number of promoters was ≥ 3 (43% lower; d = 0.59 vs. 1.04; I² = 0.60), when promoters were employed more frequently (30% lower; d = 0.69 vs. 0.99; I² = 0.66), when phone promoters were employed (54% lower; d = 0.56 vs. 1.21; I² = 0.52), and when concrete promoters were reported (30% lower; d = 0.69 vs. 0.99; I² = 0.66).
Discussion

The purpose of this study was to examine the moderating
effect of adherence-promoting strategies on intervention effec-
tiveness with children and adolescents and to identify client
characteristics that moderated that effect. Overall, the results
seem to indicate that adherence-promoting strategies have little
impact on effect sizes. However, meaningful patterns emerged
when the results were disaggregated by client characteristics.
While tempered by study limitations discussed later, these find-
ings are suggestive of specific directions for future research
aimed toward the development of clinical guidelines for sustain-
ing client participation in treatment.
Three patterns of findings emerged when considering moder-
ating effects of client characteristics on adherence promotion.
The first is a positive association between adherence-promoting
efforts and outcome effect sizes for some groups. For instance,
males benefited from more intensive adherence promotion
efforts, phone promoters, and in-session therapeutic strategies
to sustain adherence, whereas children and adolescents diagnosed with internalizing disorders benefited from more intensive adherence promotion, phone promoters, and concrete
promoters. These findings are consistent with the hypothesis
that adherence-promoting interventions support intervention
outcome effects. For clinical management, these findings suggest
that targeted use of adherence promoters with select popula-
tions, males and youths with internalizing disorders in the pres-
ent case, can improve intervention outcomes.
The second pattern finds a negative association between cer-
tain adherence-promoting efforts and outcome effect sizes
within specific populations. Results for externalizing disorders
are prototypical of this pattern. For this population, more inten-
sive use of adherence-promoting interventions was associated
with consistently smaller effect sizes. The deleterious impact of
adherence-promoting efforts may be one explanation for these
findings and would argue against their use. However, more plausible alternative explanations emerge when one considers the substantial treatment challenges posed by children and adolescents with externalizing disorders. Retention problems with
these children may have led to the reactive use of adherence pro-
moters. That is, threats to adherence may have caused adher-
ence-promoting efforts, as when telephone or mailed reminders
are provided to clients who miss an appointment. In effect,
intensive adherence promotion efforts may be indicative of treatment-resistant populations, which would also show up as smaller overall effect sizes in meta-analysis. In other words, this effect may be one of reverse causality. Because adherence promoters were not experimentally manipulated, the causal relations between adherence promoters and effect sizes cannot be fully teased out of the data.

Table 4. Magnitude of Effect Size Differences for Adherence Promoters Within Client Characteristics

Males
  Higher effect sizes: Ongoing (28% greater; k = 6; d = 1.42); Higher number (31% greater; k = 10; d = 0.89); Higher frequency (28% greater; k = 9; d = 0.91); Phone (19% greater; k = 10; d = 0.87)
Females
  Higher effect sizes: Ongoing (82% greater; k = 8; d = 1.00)
  Lower effect sizes: Phone (24% lower; k = 10; d = 1.04)
Younger
  Higher effect sizes: MET (34% greater; k = 9; d = 0.82); Ongoing (38% greater; k = 9; d = 0.83); Concrete (28% greater; k = 10; d = 1.40)
Older
  No differences of 19% or greater
Internalizing
  Higher effect sizes: Higher number (20% greater; k = 12; d = 1.16); Concrete (41% greater; k = 13; d = 1.20)
  Lower effect sizes: MET (30% lower; k = 8; d = 0.89)
Externalizing
  Higher effect sizes: MET (91% greater; k = 5; d = 1.09); Ongoing (62% greater; k = 3; d = 1.18)
  Lower effect sizes: Higher number (43% lower; k = 5; d = 0.59); Higher frequency (30% lower; k = 6; d = 0.69); Phone (54% lower; k = 6; d = 0.56); Concrete (30% lower; k = 6; d = 0.69)

Note. Only differences of 19% or greater are shown. MET = motivational enhancement interventions.
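The "% greater" and "% lower" contrasts reported in Table 4 can be read as relative differences between pooled effect sizes for studies that used a given promoter and studies that did not. A minimal sketch of that arithmetic follows; the d-value of 1.11 for the comparison group is hypothetical, chosen only to illustrate the calculation, and this formula is our assumption about how the percentages were derived, not the authors' documented procedure.

```python
def percent_difference(d_with, d_without):
    """Relative difference (%) between pooled effect sizes for studies
    with vs. without an adherence promoter (assumed formula)."""
    return 100.0 * (d_with - d_without) / d_without

# e.g., a pooled d of 1.42 with ongoing promoters vs. a hypothetical 1.11 without
print(round(percent_difference(1.42, 1.11), 1))  # → 27.9, i.e., about 28% greater
```

A negative result on the same formula corresponds to the "% lower" entries in the table.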
A third pattern was one of no effects. For the analysis of
adherence-promoting intensity, this pattern was shown for
females and age. However, even for these groups, individual
promoters were associated with variation in overall effect sizes.
For instance, use of phone promoters for females and predominantly adolescent samples was associated with lower effect sizes, perhaps providing an additional example of the reverse causality hypothesis.
When considered separately, the findings on individual
adherence promoters were varied, nuanced, and complex. The
associations of effect sizes with three adherence-promoting tac-
tics—telephone contacts, concrete promoters, and pretreatment
motivational enhancement—were dependent upon client char-
acteristics. The reasons for these variations are difficult to
ascertain from the data because small sample sizes precluded
further examinations of more refined categories (older males
compared with older females, for instance). In some popula-
tions, individual adherence promoters like telephone contacts
may have been used contingently, based on the emergence of
adherence problems in line with the reverse causality hypothesis
advanced earlier. In other populations, adherence promoters
may simply have had differential effects. For example, it is
noteworthy that MET was associated with larger effect sizes
for children and adolescents with externalizing disorders, a
population that may be especially well suited to this type of intervention.
The adherence-promoting strategy associated with larger
effect sizes across several client categories was ongoing thera-
peutic promoters. These promoters, described by survey respon-
dents, included direct encouragement and problem solving when
obstacles to client adherence were identified (Gearing et al., in
press). Of note in these findings was the magnitude of the effects
of ongoing therapeutic strategies for males and for youths with
externalizing disorders. These findings point to the importance
of within-session activities and highlight clinical processes as a
worthwhile focus for adherence promotion.
The competing hypotheses highlighted here, that (a) adherence promoters tend to increase positive effects and (b) the promoters may signal clinical challenges and treatment resistance, point to a
limitation of the current study: The study does not identify how
promoters were deployed. Overall, promoters can be deployed
reactively after adherence problems emerge or proactively in an
effort to preempt adherence problems and to sustain adherence
for all children and adolescents. While the survey elicited qualita-
tive descriptions of the various adherence-promoting strategies
from respondents, the modest response rate suppressed the effec-
tive sample size for the current study, limiting the value of these
data. As a consequence, more fine-grained analysis could not be
undertaken to identify clinically meaningful groups for whom
adherence promoters were more or less effective. Furthermore,
the low statistical power tempers the conclusions suggested by the
data, which await additional study and analysis.
Additional limitations of the study warrant consideration.
Notably, sampling bias may have been introduced by the
low-response survey rate. Unknown factors associated with
respondent decision to complete the survey may have reduced
the generalizability of the study. Also, retrospective reporting of
adherence promotion strategies via a survey may have intro-
duced unknown measurement error. Further, the study was
unable to disentangle the effects of adherence promoters on
actual client adherence, whether indexed by attendance rates or
by indicators of client participation such as homework comple-
tion, because research reports did not include these indicators of
adherence. And, finally, our strategy of using median splits to
test for moderation, although employed as a consequence of our
small sample size, likely obscured more subtle patterns. Thus,
the findings of this study should be considered exploratory and
should not be used to guide treatment decisions with individual clients.
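The median-split moderation strategy described above can be sketched in a few lines. The per-study values below are invented for illustration, and sample-size weighting of effect sizes (in the spirit of Hunter & Schmidt, 2004, cited in the references) is an assumption about the pooling method, not a description of the authors' exact analysis.

```python
from statistics import median

# Hypothetical per-study data: (effect size d, total N, adherence-promotion intensity)
studies = [(0.45, 60, 2), (0.80, 40, 5), (0.30, 120, 1), (1.10, 35, 6), (0.60, 80, 3)]

def weighted_mean_d(group):
    # Sample-size-weighted mean effect size across a group of studies
    return sum(d * n for d, n, _ in group) / sum(n for _, n, _ in group)

cut = median(s[2] for s in studies)            # split studies at the median intensity
low = [s for s in studies if s[2] <= cut]
high = [s for s in studies if s[2] > cut]
print(round(weighted_mean_d(low), 2), round(weighted_mean_d(high), 2))  # → 0.43 0.94
```

As the limitations section notes, a dichotomous split like this discards within-group variation in intensity, which is one way subtler moderation patterns can be obscured.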
Directions for Research
Findings of this study point to four directions for research
into strategies for increasing child and adolescent adherence
to treatment. First, results of the present study echo adher-
ence research in allied fields that highlight the need for sus-
tained adherence-promoting efforts over the full course of the
intervention. However, the dosage required has not been
established. To the degree that an effective dose of adherence-
promoting efforts may vary across diagnosis, age, gender, and
other client characteristics, studies are needed that examine
adherence promotion specifically with these client features in
mind. Second, the results of this study suggest targeting
adherence-promoting efforts to children and adolescents who
are at heightened risk of adherence problems and thereby
may benefit most from active efforts toward adherence pro-
motion. Experimental research with higher risk youths will
help to clarify the reverse causality hypothesis raised in this
study. Third, how adherence-promoting interventions can be
targeted to help clients overcome specific barriers, be they
cognitive, concrete, or social, needs to be examined. For some
groups, adherence promoters may be most effective when sev-
eral types (e.g., phone, mail, technology assisted) are bundled
together and when they are carefully constructed to minimize
the threat posed by discrete risk factors such as real or per-
ceived barriers to treatment and health beliefs about the bene-
fit of treatment. Systematic targeting of adherence promoters
will not only increase efficacy of such strategies but will maxi-
mize intervention cost-effectiveness. Finally, these results sug-
gest the need to test the relative effectiveness of proactive
versus reactive adherence promoters. It appears that adherence
efforts may be ad hoc and reactive in many cases. For
instance, research documents how letters and phone calls are
often structured for clients who have missed appointments.
Infrequently tested are the effects of telephone calls, letters,
and other outreach efforts as a preventive intervention to sus-
tain the participation of youths and their families in the treat-
ment for mental health problems.
SCHWALBE AND GEARING
Keywords: children; adolescents; adolescent mental health; evidence-based psychotherapy; client adherence; adherence-promoting interventions

References
Armbruster, P., & Fallon, T. (1994). Clinical, sociodemographic, and
systems risk factors for attrition in a children’s mental health clinic.
American Journal of Orthopsychiatry, 64, 577–585.
Armbruster, P., & Kazdin, A. E. (1994). Attrition in child psychother-
apy. Advances in Clinical Child Psychology, 16, 81–109.
Bennett, D. S., & Gibbons, T. A. (2000). Efficacy of child cognitive-
behavioral interventions for antisocial behavior: A meta-analysis.
Child and Family Behavior Therapy, 22, 1–15.
Bolton, P. B., Bass, J., Betancourt, T., Speelman, L., Onyango, G.,
Clougherty, K. F., . . . Verdeli, H. (2007). Interventions for depression
symptoms among adolescent survivors of war and displacement in
northern Uganda: A randomized controlled trial. JAMA, 298, 519–
Bonner, B. L., & Everett, F. L. (1986). Influence of client preparation
and problem severity on attitudes and expectations in child
psychotherapy. Professional Psychology: Research and Practice, 17,
Bosch-Capblanch, X., Abba, K., Prictor, M., & Garner, P. (2007). Con-
tracts between patients and healthcare practitioners for improving
patients’ adherence to treatment, prevention and health promotion
activities. Cochrane Database of Systematic Reviews, 2007(2), Art.
No.: CD004808. doi: 10.1002/14651858.CD004808.pub3
Bradley, M. C., & Mandell, D. (2005). Oppositional deviant disorder: A
systematic review of evidence of intervention effectiveness. Journal of
Experimental Criminology, 1, 343–365.
Burns, C. D., Cortell, R., & Wagner, B. M. (2008). Treatment compli-
ance in adolescents after attempted suicide: A 2-year follow-up study.
Journal of the American Academy of Child & Adolescent Psychiatry,
Chu, B. C., & Harrison, T. L. (2007). Disorder-specific effects of CBT
for anxious and depressed youth: A meta-analysis of candidate media-
tors of change. Clinical Child and Family Psychology Review, 10, 352–
Clarkin, J. F., & Levy, K. N. (2004). Influence of client variables on
psychotherapy. In M. J. Lambert (Ed.), Handbook of psychother-
apy and behavior change (5th ed., pp. 194–226). New York, NY:
Cohen, J. (1988). Statistical power analysis for the behavioral sciences
(2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Daley, D. C., & Zuckoff, A. (1999). Improving treatment compliance:
Counseling and systems strategies for substance abuse and dual disor-
ders. Center City, MN: Hazelden.
Fabiano, G. A., Pelham, W. E., Jr., Coles, E. K., Gnagy, E. M., Chro-
nis-Tuscano, A., & O’Connor, B. C. (2009). A meta-analysis of
behavioral treatments for attention-deficit/hyperactivity disorder.
Clinical Psychology Review, 29, 129–140.
Garfield, S. L. (1994). Research on client variables. In S. L. Garfield &
A. E. Bergin (Eds.), Handbook on psychotherapy and behavior change
(4th ed., pp. 190–228). New York, NY: Wiley.
Gaynor, S. T., Lawrence, P. S., & Nelson-Gray, R. O. (2006). Measur-
ing homework compliance in cognitive-behavioral therapy for adoles-
cent depression: Review, preliminary findings, and implications for
theory and practice. Behavior Modification, 30, 647–672.
Gearing, R. E., Schwalbe, C. S., Dweck, P., & Berkowitz, J. (in press).
Investigating adherence promoters in evidence-based mental health
interventions with children and adolescents. Community Mental Health Journal.
Geffken, G. R., Keeley, M. L., Kellison, I., Storch, E. A., & Rodrigue,
J. R. (2006). Parental adherence to child psychologists' recommendations from psychological testing. Professional Psychology: Research
and Practice, 37, 499–505.
Grote, N. K., Zuckoff, A., Swartz, H., Bledsoe, S. E., & Geibel, S.
(2007). Engaging women who are depressed and economically disad-
vantaged in mental health treatment. Social Work, 52, 295–308.
Haynes, R. B., Ackloo, E., Sahota, N., McDonald, H. P., & Yao, X.
(2008). Interventions for enhancing medication adherence. Cochrane
Database of Systematic Reviews, 2008(2) Art. No.: CD000011. doi:
Hunter, J. E., & Schmidt, F. (2004). Methods of meta-analysis (2nd ed.).
Thousand Oaks, CA: Sage.
In-Albon, T., & Schneider, S. (2007). Psychotherapy of childhood anxi-
ety disorders: A meta-analysis. Psychotherapy and Psychosomatics, 76,
Ishikawa, S., Okajima, I., Matsuoka, H., & Sakano, Y. (2007). Cognitive
behavioural therapy for anxiety disorders in children and adolescents:
A meta-analysis. Child and Adolescent Mental Health, 12, 164–172.
Katz, E. C., Brown, B. S., Schwartz, R. P., Weintraub, E., Barksdale,
W., & Robinson, R. (2004). Role induction: A method for enhancing
early retention in outpatient drug-free treatment. Journal of Consult-
ing and Clinical Psychology, 72, 227–234.
Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Direc-
tions for research and practice. New York, NY: Oxford University Press.
Kazdin, A. E. (2008). Evidence-based treatments and delivery of psycho-
logical services: Shifting our emphases to increase impact. Psychologi-
cal Services, 5, 201–215.
Kazdin, A. E., Holland, L., & Crowley, M. (1997). Family experience of
barriers to treatment and premature termination from child therapy.
Journal of Consulting & Clinical Psychology, 65, 453–463.
Kazdin, A. E., & Mazurick, J. L. (1994). Dropping out of child psycho-
therapy: Distinguishing early and late dropouts over the course of
treatment. Journal of Consulting and Clinical Psychology, 62, 1069–
Kazdin, A. E., Mazurick, J. L., & Siegel, T. C. (1994). Treatment out-
come among children with externalizing disorder who terminate pre-
maturely versus those who complete psychotherapy. Journal of the
American Academy of Child & Adolescent Psychiatry, 33, 549–557.
Killaspy, H., Banerjee, S., King, M., & Lloyd, M. (2000). Prospective
controlled study of psychiatric out-patient non-attendance: Character-
istics and outcome. British Journal of Psychiatry, 176, 160–165.
King, C. A., Hovey, J. D., Brand, E., Wilson, R., & Ghaziuddin, N.
(1997). Suicidal adolescents after hospitalization: Parent and family
impacts on treatment follow-through. Journal of the American Acad-
emy of Child and Adolescent Psychiatry, 36, 85–93.
King, C. A., Kramer, A., Preuss, L., Kerr, D. C. R., Weisse, L., &
Venkataraman, S. (2006). Youth-nominated support team for suicidal
adolescents (version 1): A randomized controlled trial. Journal of
Consulting and Clinical Psychology, 74, 199–206.
Lai, K. Y., Pang, A. H., Wong, C. K., Lum, F., & Lo, M. K. (1998).
Characteristics of dropouts from a child psychiatry clinic in Hong
Kong. Social Psychiatry and Psychiatric Epidemiology, 33, 45–48.
Landenberger, N. A., & Lipsey, M. W. (2005). The positive effects of
cognitive-behavioral programs for offenders: A meta-analysis of fac-
tors associated with effective treatment. Journal of Experimental
Criminology, 1, 451–476.
Lefforge, N. L., Donohue, B., & Strada, M. J. (2007). Improving session
attendance in mental health and substance abuse settings: A review of
controlled studies. Behavior Therapy, 38, 1–22.
MacNaughton, K. L., & Rodrigue, J. R. (2001). Predicting adherence to
recommendations by parents of clinic-referred children. Journal of
Consulting and Clinical Psychology, 69, 262–270.
Majewicz-Hefley, A., & Carlson, J. S. (2007). A meta-analysis of com-
bined treatments for children diagnosed with ADHD. Journal of
Attention Disorders, 10, 239–250.
McKay, M. M., Hibbert, R., Hoagwood, K., Rodriguez, J., Murray, L.,
Legerski, J., & Fernandez, D. (2004). Integrating evidence-based
engagement interventions into "real world" child mental health settings. Brief Treatment and Crisis Intervention, 4, 177–186.
Miller, W. R., & Rollnick, S. (2002). Motivational interviewing: Prepar-
ing people to change. New York, NY: Guilford.
Miranda, J., Azocar, F., Organista, K. C., Dwyer, E., & Areane, P. (2003).
Treatment of depression among impoverished primary care patients
Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT
statement: Revised recommendations for improving the quality of
reports of parallel-group randomized trials. Annals of Internal Medi-
cine, 134, 657–662.
Nock, M. K., & Ferriter, C. (2005). Parent management of attendance
and adherence in child and adolescent therapy: A conceptual and
empirical review. Clinical Child and Family Psychology Review, 8,
Nock, M. K., & Kazdin, A. E. (2005). Randomized controlled trial
of a brief intervention for increasing participation in parent manage-
ment training. Journal of Consulting and Clinical Psychology, 73,
Olfson, M., Mojtabai, R., Sampson, N. A., Hwang, I., Druss, B., Wang,
P. S., . . . Kessler, R. C. (2009). Dropout from outpatient mental
health care in the United States. Psychiatric Services, 60, 898–907.
Pekarik, G. (1985). Coping with dropouts. Professional Psychology:
Research and Practice, 16, 114–123.
Pekarik, G., & Stephenson, L. A. (1988). Adult and child client differ-
ences in therapy dropout research. Journal of Clinical Child Psychol-
ogy, 17, 316–321.
Pina, A. A., Silverman, W. K., Weems, C. F., Kurtines, W. M., & Gold-
man, M. L. (2003). A comparison of completers and noncompleters
of exposure-based cognitive and behavioral treatment for phobic and
anxiety disorders in youth. Journal of Consulting and Clinical Psychol-
ogy, 71, 701–705.
Prins, P. J. M., & Ollendick, T. H. (2003). Cognitive change and
enhanced coping: Missing mediational links in cognitive behavior
therapy with anxiety-disordered children. Clinical Child and Family
Psychology Review, 6, 87–105.
Reid, R. (2005). Self-regulation interventions for children with attention
deficit/hyperactivity disorder. Exceptional Children, 71, 361–377.
Richardson, L. A. (2001). Seeking and obtaining mental health ser-
vices: What do parents expect? Archives of Psychiatric Nursing, 15,
Rotheram-Borus, M. J., Piacentini, J., Van Rossem, R., Graae, F.,
Cantwell, C., Castro-Blanco, D., & Feldman, J. (1999). Treatment
adherence among Latina female adolescent suicide attempters. Suicide
& Life-Threatening Behavior, 29, 319–331.
Shivack, I. M., & Sullivan, C. W. (1989). Use of telephone prompts at
an inner-city outpatient clinic. Hospital & Community Psychiatry, 40,
Steinberg, M. L., Ziedonis, D. M., Krejci, J. A., & Brandon, T. H.
(2004). Motivational interviewing with personalized feedback: A brief
intervention for motivating smokers with schizophrenia to seek treat-
ment for tobacco dependence. Journal of Consulting & Clinical Psy-
chology, 72, 723–728.
Swift, J. K., Callahan, J. L., & Levine, J. C. (2009). Using clinically sig-
nificant change to identify premature termination. Psychotherapy:
Theory, Research, Practice, and Policy, 46, 328–335.
Taylor, E., & Stansfield, S. (1984). Children who poison themselves: A
clinical comparison with psychiatric controls. British Journal of Psy-
chiatry, 145, 127–132.
Weersing, V., & Weisz, J. R. (2002). Community clinic treatment of
depressed youth: Benchmarking usual care against CBT clinical trials.
Journal of Consulting and Clinical Psychology, 70, 299–310.
Weisz, J. R., McCarty, C. A., & Valeri, S. M. (2006). Effects of psycho-
therapy for depression in children and adolescents: A meta-analysis.
Psychological Bulletin, 132, 132–149.
Weisz, J. R., Weiss, B., Alicke, M. D., & Klotz, M. L. (1987). Effective-
ness of psychotherapy with children and adolescents: A meta-analysis
for clinicians. Journal of Consulting and Clinical Psychology, 55, 542–
Weisz, J. R., Weiss, B., Han, S. S., Granger, D. A., & Morton, T.
(1995). Effects of psychotherapy with children and adolescents revis-
ited: A meta-analysis of treatment outcome studies. Psychological Bul-
letin, 117, 450–468.
Wierzbicki, M., & Pekarik, G. (1993). A meta-analysis of psychother-
apy dropout. Professional Psychology: Research and Practice, 24,
Arthur, W., Jr., Bennett, W., & Huffcutt, A. I. (2001). Conducting meta-analysis using SAS. Mahwah, NJ: Lawrence Erlbaum Associates.
Survey of Adherence-Promoting Interventions*:
Subsection on Telephone Promoters
Concrete Adherence-Promoting Interventions
The next series of questions asks whether you have used any
of the following concrete strategies to increase adherence:
☐ Telephone reminders
☐ Mailed reminders
☐ Technology-assisted correspondence
☐ Increasing convenience
When you answer "yes," you will be directed to a set of questions to elaborate on the strategies.
1. Did your study protocols include telephone reminders?
2. Who was the intended recipient of the phone call?
☐ Parent or caregiver of the child/adolescent
☐ Both parent or caregiver and child/adolescent
☐ Other (please specify) _________________________
3. How often were telephone reminders provided?
☐ 100% of all sessions
☐ More than 50% of all sessions
☐ 50% of all sessions
☐ Less than 50% of all sessions
☐ For the first session only
☐ As needed
4. How much staff time was devoted to each telephone reminder?
☐ 5 min or less
☐ 10 min or less
☐ 20 min or less
☐ More than 20 min
5. What were the qualifications of the person implementing the telephone reminders?
☐ High school or equivalent
☐ Undergraduate research assistant or equivalent
☐ Graduate research assistant or equivalent
☐ Postgraduate clinical/research staff
6. Estimate the approximate number of hours of pretreatment training provided to conduct telephone reminders.
☐ 1 hr or less
☐ 4 hr or less
☐ 8 hr or less
☐ 16 hr or less
☐ More than 16 hr
7. How frequently was training/supervision to conduct telephone reminders provided during the course of the mental health intervention?
☐ Once (or more) weekly
☐ Twice monthly
☐ Once monthly
☐ Less than once monthly
*The full questionnaire is available by contacting the authors.