ORIGINAL PAPER
Methodology Considerations in School Mental Health Research
Gregory A. Fabiano
Sandra M. Chafouleas
Mark D. Weist
W. Carl Sumi
Neil Humphrey
Published online: 4 January 2014
© Springer Science+Business Media New York 2014
School Mental Health (2014) 6:68–83
DOI 10.1007/s12310-013-9117-1

G. A. Fabiano (✉)
Department of Counseling, School and Educational Psychology,
University at Buffalo, SUNY, Buffalo, NY 14260, USA
e-mail: Fabiano@buffalo.edu

S. M. Chafouleas
Department of Educational Psychology, Neag School of
Education, University of Connecticut, Storrs, CT, USA

M. D. Weist
Department of Psychology, University of South Carolina,
Columbia, SC, USA

W. Carl Sumi
SRI International, Menlo Park, CA, USA

N. Humphrey
University of Manchester, Manchester, UK
Abstract Research in the area of school mental health
(SMH) has undergone rapid evolution and expansion, and
as such, studies require the use of diverse and emerging
methodologies. In parallel with the increase in SMH
research studies has been greater realization of the complex
research methods needed for the optimal measurement,
design, implementation, analysis, and presentation of
results. This paper reviews key steps needed to effectively
study SMH research questions. Considerations around
research designs, methods for describing effects and out-
comes, issues in measurement of process and outcomes,
and the foundational role of school and community
research partnerships are discussed within the context of
SMH research studies. Ongoing developments within SMH
research methods are presented as illustrative examples.
Keywords School mental health · Research methods · Research design · Outcome measurement
The emergence of school mental health (SMH) as a distinct
area of inquiry and practice has generated greater realiza-
tion of the complex research methods needed for the
measurement, design, implementation, analysis, and pre-
sentation of results to key stakeholders. Studies of SMH
focus on development, efficacy, effectiveness, implemen-
tation, and systems. This breadth of focus has resulted in
utilization of a greater variety of designs and, conse-
quently, the ability to answer more complex and precise
research questions relevant to SMH. However, this
expansion also has increased complexities related to
the systematic advancement of avenues for SMH
research, including the demands on researcher–practitioner
partnerships.
SMH researchers face diverse challenges that may not
be as common in university or clinic-based research.
School settings are dynamic and complex, and the SMH
research often occurs in the context of competing priorities
(e.g., state and federal education department mandates) and
rule-governed limitations (e.g., teacher union restrictions
on training time and classroom observations). Schools also
typically utilize numerous mental health and special edu-
cation services and supports, social and character devel-
opment programs, and other ancillary and supportive
services that are ongoing in parallel with any novel SMH
research, which can impact the magnitude and interpreta-
tion of results. Further, the primary aim of schools is the
promotion of academic achievement. Thus, to the extent
mental health efforts are seen to diverge from that goal,
SMH research may face challenges in the current school
climate characterized by an emphasis on student achieve-
ment outcomes and teacher accountability. It is difficult to
review an exhaustive list of SMH research considerations
in a single manuscript. Rather, the intent of this manuscript
is to provide some important considerations for novice as
well as experienced SMH researchers and to provide some
illustrative examples of SMH research as applied in the
current literature. Content is organized under sections
describing designs and analyses that can be utilized in
SMH research, methods for describing effects and out-
comes in SMH studies, considerations in measurement of
SMH process and outcomes, and the foundational role of
school and community partnerships in this research, as
these are the steps a researcher is likely to follow in the
planning of a SMH study. As appropriate, considerations
that are specific to SMH research applications are
discussed.
Research Designs Utilized in School Mental Health
SMH investigators interested in answering research ques-
tions must ensure their research design is aligned with the
questions posed. Research designs in SMH include many
of those typically employed in basic and applied research.
These methods include descriptive and observational
research (Berkowitz, 1968; Schnoes et al., 2006), longitu-
dinal studies (e.g., Wagner, Kutash, Duchnowski, Epstein,
& Sumi, 2005), single-case designs (O’Leary & O’Leary,
1982), randomized trials (e.g., Power et al., 2012; Weist
et al., 2009; Weisz, Thurber, Sweeney, Proffitt, & LeG-
agnoux, 1997), and cluster randomized trials (Social and
Character Development Research Consortium, 2010). The
large number of designs and methods included under the
auspices of SMH is both a blessing and a curse. On the
one hand, there are numerous methodological approaches
available to answer a number of research questions; how-
ever, the diversity of methods can make it difficult to
aggregate knowledge across studies and research teams.
This points to the need for systematic reviews and meta-
analyses that utilize innovations in aggregating disparate
methods, and for approaches to research synthesis that are
inclusive of multiple study designs (see
DuPaul, Eckert & Vilardo, 2012 as an example). Below,
we briefly review the study designs available for use in
SMH, followed by a discussion of design considerations
important for SMH researchers.
Single-Case Designs
There is a long history of implementing rigorous, single-
subject design studies in SMH research (e.g., O’Leary &
O’Leary, 1982). In fact, a review of the early issues of the
Journal of Applied Behavior Analysis illustrates multiple
school-based, single-case studies that address behavioral,
social, or emotional challenges in the school setting. Thus,
single-case methods are well established and, when
properly designed and implemented, can permit causal
inference (Kazdin, 1982; Riley-Tillman & Burns, 2009).
Reflecting the large single-subject literature in school set-
tings, the What Works Clearinghouse, which is the dis-
semination portal for scientific findings in education,
developed and began to implement guidelines for weighing
the evidence from single-subject design studies (Kratoch-
will et al., 2010).
A variant of single-subject design studies is the crossover
or repeated measures design, wherein interventions are
introduced and withdrawn for a group of students (e.g.,
Pfiffner & O’Leary, 1987). These designs are efficient,
given that students serve as their own control, but they also
have limitations such as a risk of contamination across
conditions or other carry-over effects. These designs have
been used most frequently in analogue
(i.e., experimental) classroom settings (e.g., Fabiano, et al.,
2007; Pelham et al., 1993; Pfiffner, Rosen, & O’Leary,
1985), but there are examples within applied settings such
as day treatment programs (Kolko, Bukstein, & Barron,
1999). Crossover designs are useful when there is little risk
of carry-over effects across conditions; for instance, a
crossover design may be used in a study where a positive or
negative consequence is systematically introduced and
removed.
Randomized Controlled Trials
A number of prominent examples of randomized controlled
trials (RCTs) appear within the SMH literature, with sub-
stantially increased numbers following the creation of the
Institute of Education Sciences in 2002. To illustrate the
rapid change in the use of RCTs in educational contexts, a
heuristic search in the online ERIC database was conducted
by entering the search term "randomized trial." There were
35 results dated between 1978 and 1993, 95
results between 1994 and 2003, and 1,602 results from
2004 through June 2013. Clearly, RCTs in schools have
been increasing in prominence, and with increased use has
come rapid innovations for SMH research. Examples of
these innovations to determine intervention efficacy are
reviewed next.
Cluster Randomized Controlled Trials
A key recent advancement relates to the increase in use of
cluster randomized controlled trials (CRCTs). In CRCTs,
the district, school, or classroom is the unit of randomi-
zation rather than the individual, which is often preferable
in school-based evaluations because it guards against
treatment diffusion/contamination (Puffer, Torgerson, &
Watson, 2005). Additionally, the nature of many SMH
interventions, which often involve universal interventions,
can make individual randomization impractical. A recent
example of the application of the CRCT design to SMH
can be found in Fonagy et al. (2009), which examined the
efficacy of child-focused psychiatric consultation and a
school system-focused intervention on aggression and
victimization among elementary schoolchildren. In this
study, the authors randomly allocated schools to one of two
intervention conditions or a ‘treatment as usual’ control
condition. In this manner, school-wide implementation of
the independent variable occurs, reducing risk of contam-
ination across classrooms or cases.
The analytical technique of choice that accompanies
CRCTs is hierarchical linear modeling (HLM), also known
as multi-level modeling. HLM may be preferable to tradi-
tional techniques (such as analysis of variance) because it is
able to account for the hierarchical and clustered nature of
data drawn from intact social units (such as schools and/or
classrooms) (Raudenbush & Bryk, 2002). Scores on a given
outcome variable of children attending the same school are
correlated—this is known as the intra-cluster correlation, or
ICC (Killip, Mahfoud, & Pearce, 2004). When the ICC is not
taken into account, spurious results may occur (Muijs, 2010).
An additional benefit of HLM is that it allows the researcher
to assess the contribution of cluster-level explanatory vari-
ables above and beyond that of treatment group. For exam-
ple, Humphrey, Lendrum, Barlow, Wigelsworth, and
Squires (2013) were able to demonstrate the role played by a
range of school-level contextual factors (e.g., proportion of
students with disabilities) and implementation activities
(e.g., school leadership, intervention fidelity, and dosage) in
mediating the impact of the Achievement for All program on
psychosocial outcomes.
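To make this concrete, the following minimal sketch (in Python, using the statsmodels library) shows how a two-level random-intercept model might be fit and the ICC derived from its variance components; the data are simulated, and all variable names and values are illustrative rather than drawn from any study discussed here.

```python
# Minimal sketch: fitting a two-level random-intercept model and estimating
# the ICC. Data, variable names, and effect values are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)
treatment = np.repeat(rng.integers(0, 2, n_schools), n_students)  # school-level assignment
school_effect = rng.normal(0, 1.0, n_schools)[school]             # between-school variance
outcome = 0.3 * treatment + school_effect + rng.normal(0, 2.0, len(school))
data = pd.DataFrame({"outcome": outcome, "treatment": treatment, "school": school})

# Random-intercept model: students (level 1) nested within schools (level 2)
model = smf.mixedlm("outcome ~ treatment", data, groups=data["school"]).fit()
between = model.cov_re.iloc[0, 0]   # between-school variance component
within = model.scale                # residual (within-school) variance
icc = between / (between + within)
print(f"Estimated ICC: {icc:.3f}")
```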
Despite the numerous advantages conferred by CRCTs,
a number of challenges for SMH research are presented.
Chief among these are selection bias and differential
attrition, which can occur at two levels (cluster and indi-
vidual) in a CRCT (Puffer et al., 2005). Effective sample
size is also an issue, and calculations for cluster trials have
to include consideration of the ICC and average cluster size
in addition to the usual estimated effect size, power, and
alpha level (see Campbell, Thomson, Ramsey, MacLen-
nan, & Grimshaw, 2004: http://www.abdn.ac.uk/hsru/
sampsize/ssc.exe; or Raudenbush et al., 2011). CRCTs
have reduced statistical efficiency relative to trials using
individual randomization (Donner & Klar, 2004). This
design effect varies as a function of the ICC and average
cluster size (Kerry & Bland, 1998). The end result is that
CRCTs typically require much larger samples than indi-
vidually randomized trials in order to be adequately pow-
ered (Puffer et al., 2005). Additionally, accurately
estimating the likely ICC for the primary outcome variable
in a SMH CRCT can be challenging. ICCs for mental
health variables recorded in existing research are a useful
reference but vary greatly depending upon the exact nature
of the construct being assessed (e.g., anxiety, depression,
conduct problems), source of data (e.g., child, parent, tea-
cher), phase of education (e.g., elementary, middle, high),
and other factors (see Sellström & Bremberg, 2006, for a
review). Additional research is needed to provide guidance
on expected ICCs within SMH contexts.
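As a concrete illustration of this inflation, the sketch below applies the standard design effect, DEFF = 1 + (m − 1) × ICC (Kerry & Bland, 1998), to a conventional normal-approximation sample size calculation; the effect size, ICC, and cluster size used are illustrative assumptions, not recommendations.

```python
# Minimal sketch: inflating an individually randomized sample size to account
# for clustering. Input values are illustrative assumptions.
import math
from scipy import stats

def n_per_arm_individual(d, power=0.80, alpha=0.05):
    """Approximate per-arm n for a two-sample comparison of means."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

def n_per_arm_cluster(d, icc, cluster_size, power=0.80, alpha=0.05):
    """Inflate by the design effect: DEFF = 1 + (m - 1) * ICC."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_per_arm_individual(d, power, alpha) * deff)

# Example: d = 0.30, ICC = 0.05, average class size of 25
print(n_per_arm_individual(0.30))                        # ~175 students per arm
print(n_per_arm_cluster(0.30, icc=0.05, cluster_size=25))  # ~385: more than double
```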
Adaptive Treatment Designs
Innovative designs that permit new research questions to be
answered have also been developed, such as adaptive
treatment designs (Murphy, 2005). Adaptive treatment
designs begin participants in an intervention and then, based
on ongoing progress monitoring, randomly assign partici-
pants to additional intervention(s) in a systematic manner.
In addition to efficacy questions, questions related to
sequencing and dosage can also be addressed. Adaptive
treatment designs are an alternative to fixed design clinical
trials where an individual is assigned to an intervention,
and that intervention is the same for everyone, regardless
of response. Although this fixed intervention approach is
useful for documenting the efficacy of interventions, it is
limited in that it does not accurately mirror what happens
in SMH practice; if a child does not respond to a SMH
intervention, good practitioners would typically try an
alternative intervention until one that was effective was
used (notably, this approach is the foundation of widely
employed approaches in school-based service delivery such
as Response to Intervention).
Recently, adaptive treatment designs have been initiated
for SMH applications (Nahum-Shani et al., 2012a, b).
Pelham and colleagues (Nahum-Shani et al., 2012a, b)
illustrated how adaptive treatment designs can be used to
answer SMH questions that were previously unaddressed.
In this study, children with ADHD were randomly assigned
to begin the school year with a low dose of behavior
therapy (i.e., behavioral parent training and a school-based
daily report card) or a low dose of stimulant medication.
After the first 2 months of school, and every month for the
school year thereafter, children were assessed for impair-
ment in functioning. If a child was found to be progressing
appropriately, the initial treatment regimen was main-
tained. However, if a child was determined to be impaired
in functioning, he or she was randomly assigned to receive
a greater dose of the initial treatment, or to crossover to the
other treatment. At the end of the study, there were six
groups of participants—those who received the initial
treatment and required nothing more (i.e., low-dose med-
ication or low-dose behavior therapy), those who received
higher doses of the initial treatment (i.e., began with
medication and had medication dose increased or began
with behavior therapy and had the intensity of the
behavioral intervention increased), or those who received
combined treatment (i.e., started with medication and had
behavior therapy added or started with behavior therapy
and had medication added). This approach allows novel
SMH questions to be addressed that are simply not possible
within the framework of traditional, fixed RCTs. Future
studies leveraging this study method might address
Response to Intervention frameworks for emotional and
behavioral challenges or appropriate sequencing for coun-
seling interventions in schools using an adaptive treatment
design methodology.
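The core decision logic of the re-randomization step described above can be sketched compactly. The following is a simplified, hypothetical rendering, not the study's actual protocol; the treatment labels, the binary impairment indicator, and the equal-probability re-randomization are illustrative placeholders.

```python
# Minimal sketch of the decision logic in an adaptive (SMART-style) design.
# Treatment names, the impairment indicator, and the 50/50 re-randomization
# are illustrative placeholders.
import random

def adapt_treatment(initial_treatment, impaired):
    """At each monthly review: maintain the regimen if the child is responding;
    otherwise re-randomize to an intensified dose or a combined treatment."""
    if not impaired:
        return initial_treatment                      # responder: stay the course
    other = {"behavior_therapy": "medication",
             "medication": "behavior_therapy"}[initial_treatment]
    return random.choice([f"intensified_{initial_treatment}",
                          f"{initial_treatment}+{other}"])

# Example: a non-responder who began with low-dose behavior therapy
print(adapt_treatment("behavior_therapy", impaired=True))
```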
Mixed-Method Designs
An additional emerging area in SMH research is the use of
mixed-method designs including both qualitative and
quantitative analyses. Qualitative inquiry can yield impor-
tant explanatory data and can also be particularly useful in
exploratory studies or when quantitative methods are not
feasible or appropriate. For example, when an intervention
is first being piloted, questions of acceptability, feasibility,
and social validity may be paramount and are more ame-
nable to qualitative data gathering and analytical techniques
(see Langberg et al., 2011; Pfiffner et al., 2011; Shernoff
et al., 2011 as examples). In addition, qualitative data can
help answer questions when quantitative findings are pre-
liminary or puzzling.
Informative data can also be generated using innovative
approaches from other fields. For instance, prior to inter-
vention development, marketing research strategies that use
conjoint analyses to investigate the trade-offs and important
considerations consumers make in their choices related to
engagement or participation in interventions offer an inno-
vative, yet under-explored strategy to help promote the
acceptability of developed interventions or assessments
(Cunningham, Vaillancourt, et al., 2009; Spoth & Redmond,
1993). For instance, a recent consumer preference survey of
nearly 1,200 educators used a discrete choice, conjoint
analysis to determine preferences for a bullying prevention
program (Cunningham et al., 2009). Results indicated that
there were multiple segments of educators within the sample,
with each having different preferences for bullying preven-
tion interventions; for example, some preferred to select
bullying prevention programs, whereas others preferred
school boards to make this decision. Such quantitative
approaches may assist with the development of interventions
that will be feasible and well received by the intended
implementers and/or consumers.
Summary
In contrast to other fields in which between-group, ran-
domized trials are emphasized (Agency for Healthcare
Research, 2013), it is clear that a variety of research
designs are appropriate for answering SMH-related ques-
tions. This affords investigators latitude in their study
planning, but it also creates challenges in that researchers
must select the most powerful design, and integrating data
across various designs may be difficult. As the SMH field
matures, it is also imperative that new and innovative
approaches be developed for accounting for common
problems in this type of research.
Measurement of Process and Outcome in SMH
Research
Following the selection of a research design appropriate for
the study questions, researchers will then identify appro-
priate measures for the study. Kazdin (2005) noted that a
quest for evidence-based behavioral assessment requires
acknowledgment of multiple complexities, with perhaps
foremost among the issues being the absence of a gold
standard criterion in SMH. Unlike definitive academic
indices such as passing an annual state mastery test, a
simple indicator of behavioral, social, and emotional well-
being does not exist. Additional issues include recognition
that multiple measures will likely be needed to evaluate
different facets, and the good possibility that the measures
will not necessarily agree. In this section, we frame a
review of issues associated with measurement in SMH
research around considerations that should be reviewed
when planning measurement in SMH research.
Mapping the Theory of Change
The initial step toward measure selection does not actually
involve measurement, but rather, it requires mapping the
theory of change for the proposed intervention. Identifi-
cation of the relevant features of the problem along with
goals and expectations should occur in tandem with the
design selection outlined above, and this occurs before
engaging in selection of specific measures. A theory of
change articulates clear understanding of the goal and the
specific components hypothesized to get to that outcome,
and a comprehensive picture considers identification of not
only the long-term goal but also the short and intermediate
expectations. Further, a well-articulated theory of change
identifies the mediators and moderators expected to impact
outcomes, and these parameters all become important foci
of measurement efforts.
Although a theory of change could become potentially
complex, it is important to present it in an uncomplicated
way to facilitate clear mapping to all of the pieces. In other
words, a reader should be able to look at a figure outlining
the theory of change and understand, in broad terms, what
is being proposed and what is to be expected. An additional
consideration involves understanding the current context
and appropriately situating the presentation within that
context. For example, the current context of SMH suggests
a shift in expected outcomes away from sole focus on
pathology in more clinical settings to outcomes that also
include promotion of behavioral health, academic
achievement, and change in systems and implementers of
an intervention. As discussed next, this shift poses addi-
tional measurement challenges given a general dearth of
associated measures and informed decisions about what
effect might reasonably be expected.
High-quality and credible assessments are those that are
carefully matched to the purpose of the evaluation and
delineation of the unique and specific requirements for that
purpose (Kazdin, 2005). These unique and specific
requirements include attention to psychometrics, sensitivity
to change, and understanding of clinical meaningfulness.
For example, in a high-stakes decision such as placement
in an alternative setting, stronger psychometrics serve a
more critical role than a low-stakes decision such as
instructional modification. These examples highlight the
differences between measurement with a purpose for for-
mative (i.e., progress monitoring) versus summative out-
come assessment. SMH intervention has historically been
dominated by the latter, yet growing use of multi-tiered
models of service delivery has pushed fervent interest in
adaptive intervention techniques, which allow for planned
modification based on student responsiveness. Unfortunately,
assessments in SMH were not quite ready for this shift
from summative purposes historically focused on measur-
ing psychopathology (Chafouleas, Volpe, Gresham, &
Cook, 2012). The shifting focus to adaptive intervention
techniques informed by frequent progress monitoring has
demanded assessment research to adapt old measures and
develop new ones that can meet formative needs. One final
point relates to clinical meaningfulness, or the degree of
change that results in improved functioning as perceived by
the individual and those around him or her. This change may not
be appropriately captured within large-scale research
studies focused on average change across participants, and
often requires idiographic analysis of effects for the indi-
vidual (Volpe & Fabiano, 2013). Indicators of outcome
should be socially valid and represent idiographic areas of
adaptive improvements and impairment reductions, which
are the typical targets of SMH interventions (Fabiano et al.,
2006; Mash & Hunsley, 2005; Pelham, Fabiano, & Mas-
setti, 2005).
Measure Selection in SMH Research
SMH researchers need to carefully consider whether a
measure used in prior mental health or school-based
research is appropriate for SMH applications. Measure
selection in the absence of supporting justification should
be avoided. Office discipline referrals might serve as one
illustration of the need for caution. Office discipline
referrals are generally considered as a form of permanent
product regarding student behavior. Sugai, Horner, and
Walker (2000) defined an office discipline referral as 'an
event in which (a) a student engaged in a behavior that
violated a rule or social norm in the school, (b) a problem
behavior was observed by a member of the school staff,
and (c) the event resulted in a consequence delivered by
administrative staff who produced a permanent (written)
product defining the whole event' (p. 96). Office discipline
referrals have been associated with poor outcomes such as
school failure and juvenile delinquency. A central starting
question in choosing to use office discipline referrals as a
dependent variable is whether a link to poor outcomes
should be surprising given the defining features of an office
discipline referral. That is, the definition supports that
deviance from social norms has been overtly identified and
acted upon by adults. Over 20 years ago, Martens (1993)
noted that the implicit normative comparisons made by
teachers suggest that by the time an office discipline
referral is received, student behavior is considered far from
acceptable. Thus, decisions to use discipline referrals for
assessment purposes such as the prediction of future
problem behavior might not be appropriate. For example,
more recent work has suggested that substantial under-
identification of students at risk can occur through use of
office discipline referrals (Miller, Welsh, Johnson, Cha-
fouleas, Riley-Tillman, & Fabiano, 2013; Nelson, Benner,
Reid, Epstein, & Currin, 2002).
Additional findings have supported low to moderate
agreement between office discipline referrals and behav-
ioral ratings (e.g., Nelson et al., 2002), calling into question
the degree to which the office discipline referral can serve
as a direct and objective measure of student functioning. In
addition to questions regarding a lack of standardization of
instrumentation and procedures across settings, substantial
concerns have been noted regarding disproportionality
around use of office discipline referrals given higher rates
among children who are black, male, and designated with
disabilities (e.g., Skiba et al., 2011). Perhaps most
concerning are suggestions that receipt of office discipline
referrals for various subgroups can lead to more harsh and
severe consequences such as suspension and expulsion
(Losen & Gillespie, 2012; Skiba et al., 2011).
Despite these cautions, office discipline referrals are
widely used as dependent measures in research on student
behavioral health. For example, a brief PsycINFO search
using the term yields hundreds of articles, the majority
of which used the measure as a key criterion or dependent
variable (Chafouleas, Briesch, Riley-Tillman, Christ,
Volpe, & Gresham, 2009). Further search reveals that a
limited number of studies have specifically detailed
investigations of validity, with the majority of those related
to examination for purposes in school-wide indices rather
than purposes such as individual formative assessment.
Taken together, this example provides illustration of the
need to heed careful attention to measure selection, vali-
dation, and appropriate use in SMH study.
Another consideration in measure selection relates to the
typical reliance on multiple methods and multiple raters in
SMH studies. Although there is relatively low agreement
found among different types of rating scale users (e.g.,
parent, teacher, child; Achenbach, McConaughy, & Ho-
well, 1987), this lack of inter-rater reliability does not mean
that certain raters are uninformative. It is often the case that
raters might be selected based on their appropriateness for
answering specific research questions related to targeted
constructs of interest. For example, Humphrey et al. (2010)
evaluated the impact of a short, social–emotional inter-
vention targeted toward children at risk for developing
mental health difficulties in elementary schools. The
authors reported significant improvements in functioning
on child self-report measures, but null results for teacher-
and parent-rated measures. In such situations, consideration
needs to be given as to which perspective is prioritized
when deciding whether the intervention can be deemed
‘effective,’ alongside the inherent advantages and disad-
vantages of that particular viewpoint (Wigelsworth, Hum-
phrey, Kalambouka, & Lendrum, 2010). For instance,
adults have been shown to be poor informants regarding instances
of bullying or peer interactions (Craig & Pepler, 1998),
whereas children are in some instances poor self-evaluators
for externalizing behaviors (Pelham et al., 2005).
Depending on the outcomes of interest and the measures
collected, different raters or measures may therefore be
prioritized, and these prioritizations should be made a
priori based on the larger research literature to allow SMH
researchers to identify primary outcomes and reduce sta-
tistical tests and/or spurious outcomes for measures that are
not hypothesized to change due to the intervention. SMH
researchers are further encouraged to align measurement
approaches with the purposes of the assessment (e.g.,
diagnostic, prognostic, assessment of outcome/progress;
see Mash & Hunsley, 2005).
Selected Measures Must Be Usable by the Intended
Respondents
Even if a researcher is able to devise a ‘perfect’ plan for
assessment, that plan likely will not come to fruition
without additional attention to the usability of the measures
by the intended respondent. Issues of assessment integrity
can plague researchers and unfortunately can go unnoticed
until after data collection occurs. This is a consideration in
SMH research in particular, given the diverse collection of
potential respondents (i.e., students, teachers, parents,
administrators, observers). Similar to the careful attention
researchers direct to training research personnel to deliver
reliable direct observations of students, attention should be
provided to ensure appropriate training of any person who
is completing measures. Notably, researcher efforts can
reduce these disparities. For example, to improve agreement
among users of Direct Behavior Rating–Single Item Scales
(DBR-SIS: www.directbehaviorratings.org), researchers have
investigated training components that balance efficiency with
the capacity to help users anchor their ratings. One resulting
product is a module that users must complete before beginning
data collection in research studies, with the recommendation
that any DBR user complete the module before adopting the
measure (Chafouleas, Kilgus, Riley-Tillman,
Jaffery, & Harrison, 2012).
Components of school-based intervention usability have
been hypothesized and studied over many years, with
empirical work supporting factors involving acceptability,
understanding, home–school collaboration, feasibility,
systems climate, and systems support (Briesch, Chafouleas,
Neugebauer, & Riley-Tillman, 2013). More recently, these
same components have been examined within the context
of school-based assessment, with results supporting a
consistent factor structure (Miller, Neugebauer, Chafou-
leas, Briesch, & Riley-Tillman, 2013). Given that SMH
research findings are dependent on many sources of
information from different people working within complex
systems surrounding the child, this work supports a place
for the evaluation of assessment usability. In addition,
evaluations can provide information to help researchers
weigh the cost/benefit of using potentially resource inten-
sive assessments.
Appropriately Acknowledge Limitations of Measures
Another recommendation regarding measurement issues in
SMH is to appropriately acknowledge limitations of mea-
sures. In addition to the expected limitations around psy-
chometrics, it is important to link identification of
limitations back to the research question and hypotheses. It
is incumbent upon the researcher to provide a priori
statements regarding what might be expected with regard
to outcomes for both targeted and broad or related mea-
sures. Parallels might be drawn from the academic realms
to help frame those statements about expectations in SMH
studies. As one illustration, Lipsey et al. (2012) have noted
that larger effects tend to be identified for individual stu-
dents than for whole classes and schools and that larger
effects tend to be noted for targeted rather than broad
measures. As we move toward building a solid base of
knowledge around areas in behavioral assessment, it is
important to clearly understand the interplay between a
study design and the previously noted complexities asso-
ciated with evidence-based assessment.
Measurement Should Include Integrity and Fidelity
Monitoring
SMH studies must demonstrate that assessments and
interventions were: (1) introduced in a
manner that promoted understanding and uptake; (2)
implemented as intended; and (3) implemented with a high
degree of quality. In educational and clinical research,
these parameters have been referred to as adherence,
fidelity, competence, and integrity, among other terms
(Cordray & Pion, 2006; Dane & Schneider, 1998; Dumas
et al., 2001; Gresham, 1989; Lane, Beebe-Frankenberger,
Lambros, & Pierson, 2001; Moncher & Prinz, 1991; Power,
et al., 2005; Schulte, Easton, & Parker, 2009; Waltz, Addis,
Koerner, & Jacobson, 1993). These terms have been used
either interchangeably or with incomplete overlap, leading to
a lack of clarity in the SMH research literature. Issues
associated with variability across definitions of treatment
integrity and fidelity in school settings have been well
described (Hagermoser-Sanetti & Kratochwill, 2009).
In SMH research, it is recommended that clear language
be used in the description of intervention integrity and
fidelity. A basic recommendation is that the constructs of
integrity and fidelity should be disentangled in SMH
research. Integrity refers to whether the implementation of
the assessments and interventions occurred as intended,
whereas fidelity refers to the quality with which the pro-
cedures were implemented. By separating these two con-
structs, SMH researchers can ensure that they are
conducting the interventions in a manner that is consistent
with the planned approach, and also using strategies that
promote genuine and meaningful implementation. Thus, in
the same way a radio needs to be tuned to the right station
(i.e., integrity) and the song must also be clearly trans-
mitted and free of static (i.e., fidelity), SMH interventions
must be implemented correctly and the delivery must be of
high quality.
Well-conceived integrity and fidelity assessments also
need to focus on the control or comparison groups as much
as the intervention group. Waltz et al. (1993) emphasized
that treatment integrity assesses how prescribed interven-
tions are implemented but also that proscribed interven-
tions are not implemented. As discussed, in SMH research,
it is difficult to have a pure control group where no inter-
vention is implemented. For instance, SMH interventions
that include behavior management strategies need to
account for background levels of related strategies across
all study groups. This use of background interventions was
speculated as a potential reason for the largely null out-
comes reported in the Social and Character Development
cluster randomized trial (Social and Character Develop-
ment Research Consortium, 2010). Yet, it is also notable
that there are differences in school reports of using SMH
interventions relative to school implementation of such
intervention (Gottfredson & Gottfredson, 2001). A related
concern relates to contamination across intervention and
control groups. SMH researchers must ensure that there is
no spill-over from the intervention into control groups due
to teachers, students, or other providers interacting across
groups, observation of one group by the other, or through
other means such as a principal advising individuals in the
comparison condition to use strategies implemented in the
intervention condition. An additional benefit of collecting
this information reliably across all study groups is that
these data can be used in analyses of mediation of
outcomes.
Beyond integrity and fidelity, SMH researchers should
also document adaptation (the nature and extent of changes
made to the intervention by those providing it), dosage
(how much and/or how frequently an intervention is
delivered), participant responsiveness (the degree to which
children and adults engage with the intervention), program
differentiation (the distinctiveness of the intervention in the
context of existing practice), and reach (what proportion of
the target population actually receives the intervention)
(Humphrey, 2013). Traditionally, research has focused
rather narrowly on implementation integrity and dosage
(Durlak & DuPre, 2008). Similarly, if not all aspects of
implementation are considered, and dosage and integrity
are measured as high, poor outcomes may be incorrectly
seen as program/theory failure (e.g., a 'Type III error')
rather than a consequence of low levels of these alternative
parameters (Lendrum & Humphrey, 2012).
Developmental Considerations in SMH Measurement
A final consideration for measurement in SMH studies
relates to the developmental level of the children being
studied. SMH is unique in that it can span the pre-kin-
dergarten to young adult (i.e., college) age range. This wide
age-span creates complications for SMH researchers. For
instance, in elementary grades, school children have only a
single teacher, which makes the collection of teacher rat-
ings for progress monitoring or outcome relatively
straightforward. However, in middle and high school
grades, children can have numerous teachers, making this
decision much more complex. Compounding the issue of
multiple teachers is that each teacher also interacts with the
child for much less time during the school day. There is no
clear guidance regarding the best way to obtain teacher
ratings in middle and high school settings within SMH
studies, and available research would recommend caution
when conducting analyses across raters (Chafouleas, et al.,
2010a, b; Evans, Allen, Moore, & Strauss, 2005).
A final issue to consider in SMH research is to ensure
measures are sensitive to developmental differences.
Assessment batteries that are lengthy or require long
stretches of sustained attention are unlikely to be feasible
for young children. On the other hand, measures that focus
on elementary school behavior may ask questions of less
import for older students, or be insensitive to develop-
mental changes across school levels. This point was illus-
trated in a recent paper that showed standard ratings of
ADHD symptoms completed by teachers in high school
had much lower endorsement of hyperactive/impulsive
symptoms than that found in elementary school samples
(Evans et al., 2013). Thus, care must be taken to use items
and norms that are developmentally appropriate, rather
than simply upwardly or downwardly extending a measure.
Further, SMH researchers engaged in longitudinal research
must consider the impact of developmental changes in
assessment planning.
Methods for Describing Effects and Outcomes
The preceding discussion of design is important as choices
in study design have consequences for expected effects.
Estimates of effect size are critical in SMH studies to help
describe the magnitude of intervention effects for scholars,
policy makers, and practitioners. However, there is often
confusion among these groups with regard to their mean-
ingfulness. For instance, many scholars continue to cite the
seminal recommendation by Cohen (1992) that character-
izes effect sizes in group design studies of .20 as 'small,'
.50 as 'medium,' and .80 as 'large.' Cohen intended to
push the field toward increased attention to issues of power
and effect size magnitude in psychological research,
qualifying this characterization of effect size by noting,
'In this bare bones treatment, I cover only the simplest
cases, the most common designs and tests, and only three
levels of effect size' (Cohen, 1992, p. 156). In spite of this
qualification, the field has emphasized Cohen’s heuristic
descriptors of ‘small, medium, and large’ for effect size
estimates, which is problematic in SMH research. Under-
standing the composition of the experimental and control
groups is essential in order to understand the meaning of
the metric, which is particularly important in SMH research
given that researchers are rarely comparing an intervention to
the absence of intervention. More often, business as usual
in schools includes a variety of interventions, which may
have some effect. In other words, effect sizes would be
attenuated, making large effects in the traditional sense
more difficult to obtain, and perhaps indicating small
effects are more meaningful. Lipsey et al. (2012) com-
prehensively address this point in education-related
research, which is related to SMH study. It is also worth
mentioning that sometimes an equivalent effect is a
meaningful outcome, such as in the case of comparing a
novel intervention to a gold standard intervention; if both
interventions result in improvements relative to some
comparison, and the two interventions are no different from
one another, the new intervention may have merit (Loni-
gan, Elbert, & Johnson, 1998). Thus, it is recommended
that these qualifiers of effect size magnitude be avoided
unless the context in which they were generated is clearly
specified. An effect size of .20 could be quite substantial or
quite modest, depending on the outcome, the presence of
concurrent interventions, or the cost/intensity of the inter-
vention under consideration. Further, study designs must be
considered when interpreting effects as effect sizes gener-
ated from group design studies are not comparable to those
generated from crossover or single-subject designs (Fabi-
ano et al., 2010; DuPaul et al., 2012). Overall, SMH
researchers are encouraged to increase the precision with
which the conditions used to generate effect size estimates
are described, given the high likelihood of concurrent and/
or competing interventions within the background of many
SMH efforts.
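A brief sketch can illustrate this attenuation. Below, the same simulated intervention is compared against a 'business as usual' condition that itself produces some improvement and against a no-intervention condition; the pooled-SD Cohen's d is smaller in the former case even though the intervention is identical. All values are simulated for illustration.

```python
# Minimal sketch: a standardized mean difference (Cohen's d with pooled SD)
# and how an active comparison condition attenuates it. Values are illustrative.
import numpy as np

def cohens_d(treatment, control):
    nt, nc = len(treatment), len(control)
    pooled_var = ((nt - 1) * np.var(treatment, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
treat = rng.normal(0.50, 1.0, 200)
usual_care = rng.normal(0.25, 1.0, 200)   # partial improvement under usual care
no_treatment = rng.normal(0.00, 1.0, 200)
print(cohens_d(treat, usual_care))    # smaller effect against an active comparison
print(cohens_d(treat, no_treatment))  # larger effect, same intervention
```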
Researchers in SMH should also be cognizant of the
outcomes addressed within studies as the effect size esti-
mate may likely vary based on the outcome. For instance, it
is common for SMH studies to focus on outcomes related
to psychiatric conditions (e.g., anxiety, ADHD). Because
the studies are also implemented in schools, academic
outcomes such as grades or productivity may also be
addressed. However, the magnitude of the expected effect
may differ depending on the intervention employed and the
target of intervention. For example, a recent meta-analysis
by DuPaul et al. (2012) reviewed the outcomes of school-
based interventions for ADHD. In group design random-
ized trials, the effect size for dependent measures related to
behavior was .72. In comparison, academic outcome effect
sizes were .43. Further, whereas the behavioral outcomes
demonstrated statistically significant improvement, aca-
demic outcomes did not. These findings illustrate the need
for researchers to potentially power their studies differen-
tially based on the outcome targeted, or perhaps to power a
study based on the smallest anticipated effect.
In parallel with traditional effect size estimates, addi-
tional indicators of effect have emphasized the clinical
meaningfulness of outcomes (Kazdin, 2005). Reports of
tests based on the inferential statistics are included in most
SMH studies, which can indicate statistically significant
differences between groups. However, limitations are
present in that the tests and accompanying effect sizes
cannot indicate the meaningfulness of the outcome. Thus, it
is incumbent upon the SMH researcher to ask several
further questions. First, does the amount of change
observed exceed that which we might attribute to mea-
surement error? There are analyses that can be applied that
take into account the consistency of a given instrument to
produce a threshold above which change can be considered
to be reliable (Evans, Margison, & Barkham, 1998). Sec-
ond, is the amount of change observed clinically or prac-
tically significant? This is more difficult to answer because
it depends on the expected outcome. For a more intensive
intervention, a fundamental determinant of success might
be the extent to which participants have moved from the
‘dysfunctional population’ range to the ‘functional pop-
ulation’ range on a given measure (Humphrey, Hanley,
Lendrum, & Wigelsworth, 2012; Jacobson & Truax, 1991;
Kendall & Grove, 1988). However, for universal SMH
interventions, we might not expect such clinically signifi-
cant change but rather, consider whether change is prac-
tically significant. Hill, Bloom, Black, and Lipsey (2008)
suggest three benchmarks for assessing practical signifi-
cance—normative expectations for change (i.e., how does
the effect of an intervention compare to an equivalent
period of growth for a given target population?), policy-
relevant performance gaps (i.e., does the effect of an
intervention address established differences/inequalities
among particular groups of children and young people,
such as those from different socioeconomic backgrounds?),
and observed effect sizes for similar interventions (i.e., how
do the effects of an intervention compare to those from
previous studies?).
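As a minimal sketch of the first question above, the reliable change index of Jacobson and Truax (1991) divides an individual's pre–post difference by the standard error of the difference implied by the measure's reliability; the scores, baseline SD, and reliability coefficient below are illustrative placeholders.

```python
# Minimal sketch of a reliable change index (Jacobson & Truax, 1991).
# The pre/post scores, baseline SD, and reliability are illustrative.
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    """|RCI| > 1.96 suggests change beyond measurement error (p < .05)."""
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * se_measurement ** 2)
    return (post - pre) / s_diff

rci = reliable_change_index(pre=62, post=54, sd_baseline=10, reliability=0.85)
print(rci, abs(rci) > 1.96)  # ~ -1.46: change not reliably beyond error
```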
Recently, the What Works Clearinghouse (2013) has
developed a related method for judging the practical sig-
nificance of study results called the ‘improvement index’ to
indicate the expected change in percentile rank for a child in
the comparison group if the child had received the inter-
vention. In a recent clinical trial that investigated how a daily
report card enhanced outcomes for youth with attention-
deficit/hyperactivity disorder (Fabiano et al., 2010), the
average improvement index was +14 for measures of
external behavior. This index indicates that the addition of
the DRC would improve the external behavior of youth with
ADHD in the comparison group by 14 percentile points (US
Department of Education, Institute of Education Sciences,
What Works Clearinghouse, 2012). Practically speaking, a
child at the 36th percentile could attain functioning at the
50th percentile in the domain of externalizing behavior with
the addition of a DRC. However, this intervention did not
meaningfully improve children’s academic achievement
(Improvement Index = +3), underscoring the need to
measure across multiple domains in SMH research. Given
that these outcomes can be interpreted readily by
stakeholders interested in SMH outcomes, it is
recommended that this metric be routinely incorporated in
SMH studies.
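Assuming the conventional normal-distribution translation used by the What Works Clearinghouse, a standardized effect size can be converted to the improvement index (the expected percentile-point gain for the average comparison-group student) as sketched below; the input effect size is illustrative.

```python
# Minimal sketch: converting a standardized effect size to an improvement
# index under the conventional normal-distribution assumption.
from scipy.stats import norm

def improvement_index(effect_size):
    """Percentile-point gain for the average comparison-group student."""
    return (norm.cdf(effect_size) - 0.5) * 100

print(round(improvement_index(0.36)))  # an effect of ~0.36 yields roughly +14
```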
A further consideration in interpreting outcomes in
SMH research is the nature of the effect observed. As
before, this will vary as a function of the aims and objec-
tives of a given intervention, the target population, and the
outcome measures utilized. A useful way to classify effects
is along an intervention spectrum that includes promotion,
prevention, treatment, and maintenance (O’Connell, Boat,
& Warner, 2009). For Intervention A, a preventive effect
(deterioration of functioning in a given target population is
curbed) may be the optimal outcome. For Intervention B, a
promotional effect (growth in a given outcome variable is
observed relative to stability or deterioration in a control
group) could be expected. Additionally, we might also ask
for whom effects have been observed. We do not expect all
students to respond to a SMH intervention in a uniform
fashion, and so analyses that can tap differential response
patterns (such as growth mixture modeling) have high
utility. A useful example can be seen in Muthén et al.'s
(2002) analysis of a randomized preventive intervention in
Baltimore public schools, which demonstrated that only
students with initially elevated levels of aggression bene-
fited from participation.
Finally, interpretation of outcomes in SMH research can
be considered from an economic perspective. Here, the
researcher might ask about the level of cost incurred
(human, material, financial) in an intervention, and whether
this is justified by the outcome(s) achieved. This consid-
eration is particularly pertinent from a policy perspective:
'Just as business executives want to know how an
investment would affect their company’s bottom line,
policymakers find it useful to ask not only whether gov-
ernment expenditures have the intended effects but also
whether investing in child and adolescent programs pro-
vides ‘profits’ to the children themselves, to taxpayers, and
to society as a whole' (Duncan & Magnuson, 2007, p. 47).
In SMH, the ‘profit’ may be savings associated with
successful prevention of maladaptive outcomes or growth
associated with improved well-being (Humphrey, 2013).
Research has indicated that mental health issues result in
considerable costs for schools (Robb et al., 2011); thus, it is
recommended that future research ascertain metrics that
best quantify cost benefits, cost offsets, and other economic
metrics related to SMH outcomes.
As we have outlined above, SMH researchers increas-
ingly focus on multiple functional domains, necessitating
multi-pronged approaches for outcome analysis. Tradi-
tional estimates of effect size need to be interpreted in light
of the context the data are collected in, and careful atten-
tion must be focused on the intervention as well as com-
parison group. Further, SMH studies should routinely
report clinical and practical significance outcomes. Finally,
estimates of cost and cost benefit are important outcomes in
SMH research given a culture of increasing accountability
of financial expenditures within school settings.
The Foundational Role of Partnerships to SMH
Research
As reviewed throughout the paper, there are many con-
siderations for SMH researchers in the design, measure-
ment, and analysis of studies. To this point, a major
consideration in SMH research has been largely unad-
dressed—that these studies take place in authentic educa-
tional settings with the intervention typically implemented
by indigenous school staff. This approach greatly increases
the external validity of SMH studies, yet SMH researchers
need to be adept at maintaining appropriate levels of
internal validity. Those that have worked in school settings
know this can be difficult given the often competing
demands between the business of schools and the goals of
research. In this section, we describe strategies for realizing
SMH research through illustrative examples of current
work, including lessons learned and areas for continued
research and evaluation.
To conduct valid and reliable research in applied set-
tings such as schools and community agencies, researchers
must develop sound and collaborative relationships with
their research partners—it is within the context of these
partnerships that generalizable SMH research studies can
be realized (Andis et al., 2002). A tension that needs to be
recognized is that schools may have had negative experi-
ences with researchers who obtain the benefits of conducting
studies in schools without 'giving back' in a meaningful
way. It is critical that research reflects actual partnerships
with clear benefits to school stakeholders (e.g., teachers,
mental health staff, families, students) and the research
program (Frazier, Formoso, Birman, & Atkins, 2008; Lever
et al., 2003). Through these partnerships, meaningful les-
sons are ideally learned and findings occur that help to
improve practice and policy, and create ideas to further
build the research avenue.
In this section, we describe themes that are important to
consider when designing and conducting SMH research in
applied settings. Themes are illustrated through two
examples, both involving randomized controlled trials
(RCTs), the first conducted within a school district, and the
second conducted through a partnership between the
mental health system and a school district. From the first
example, we focus on themes of (1) Early Relationship
Development, (2) Assuring Complementarity and Mutual
Support, (3) Increasing Support for Educational Signifi-
cance, and (4) Expanding Collaborative Relationships. The
second example builds on these themes and elaborates on
themes of (5) Adjusting Program Infrastructure and Staff-
ing, (6) Collaborative Participant Recruitment, (7) Maxi-
mizing Implementation Support, and (8) Collaborative
Dissemination. Taken together, these themes support the
realization of the research methodology described above
within SMH settings.
Example One: RCT in a School District
The first example reflects experiences in conducting a ran-
domized controlled trial (RCT) of the Cognitive Behavior
Intervention for Trauma in Schools program (CBITS) in a
large, urban school district. CBITS is a school-based inter-
vention for students who experience acute or chronic trauma
and is designed to reduce trauma-related symptoms while
improving emotional/behavioral and school functioning
(Kataoka et al., 2003; Stein et al., 2003).
Theme 1: Early Relationship Development
Before approaching the district to discuss conducting a
study of CBITS, the research team spent considerable time
learning about the school district (e.g., reviewing the Web
site, recent press releases, meeting with leaders and school
staff). This process revealed the district’s strong interest in
assisting students coping with trauma and its alignment with
plans for the RCT; this mutual interest set the stage for
further relationship development and moving forward.
Theme 2: Assuring Complementarity and Mutual Support
With support of district leaders, the research team then
sought to expand relationships with leaders and staff in
schools to obtain their support and to plan for integrating
in study procedures without disruption to the daily
activities and processes in school buildings. Based on the
platform of mutual interest and support for assisting stu-
dents with trauma, from the beginning, joint discussions
between the research team and school leaders and staff
focused on both successful implementation of the study
and sustainability beyond the research grant. District
commitment to the project was reflected in the decision to
allow school-employed social workers to be the imple-
menters of CBITS, and the research team worked with the
district to enable training of staff for broader implemen-
tation of the intervention to students not involved in the
study and beyond the study's conclusion.
Theme 3: Increasing Support for Educational Significance
Although the district showed strong interest and support for
assisting students experiencing trauma, an important need,
reflecting one of the major issues in the SMH field, was to
support the educational significance of trauma intervention
for students. Although school leaders, school staff, families,
and community stakeholders value improvements
in student social, emotional, and behavioral functioning,
there is a clear priority and policy mandate on academic
achievement, leading to graduation. Previous studies of
CBITS documented improvements in emotional/behavioral
outcomes (e.g., decreased anxiety, depression, and trauma
symptoms; Kataoka et al., 2003; Stein et al., 2003).
Changes in academic or school-related outcomes were not
typically reported with the exception of a recent investi-
gation where treatment for trauma administered soon after
screening resulted in positive outcomes on academic
grades (Kataoka et al., 2011).
Theme 4: Expanding Collaborative Relationships
Researchers should not only engage school
district and building leaders in gaining support for a study
but also reach out and expand collaborative relationships with
teachers; school-employed mental health staff; support staff;
and families and students themselves (Brandt, Glimpse,
Fette, Lever, Cammack, & Cox, 2014). These efforts need to
be genuine and sustained and, when done well, help research
staff to integrate well into school buildings, fostering
enhanced implementation (e.g., improving the climate for
student observations, increasing the likelihood that measures
will be completed and returned). Cultural sensitivity of
measures and research procedures and assuring that these are
accepted and understood by families and students is another
critical dimension, often taking considerable time, but again
promoting enhanced collaboration and smooth operation of
research processes (Clauss-Ehlers, Serpell, & Weist, 2013).
Researchers should ensure that all information about the
study sent home is translated into the needed languages,
materials are at appropriate reading levels, calls home are
conducted in the preferred language of families, and parent
and community meetings are also offered in multiple lan-
guages. Strong partnerships with school staff will likely
assist researchers in being informed about appropriate cul-
tural and community needs within participating schools.
When researchers plan projects with these details in mind,
they demonstrate to the broad array of stakeholders connected
to the schools that they are being inclusive and respectful as
they genuinely try to assist the educational community.
Example Two: RCT in Mental Health System Working
in the Schools
The above example reflected an RCT within a school dis-
trict. Building from these themes (all of which remain
relevant), this example describes additional themes from an
RCT involving the mental health system working in
schools. The study was the second in a series focused on
integrating high-quality, evidence-based practices into the
natural environment of schools (see Weist et al., 2009).
The project focused on improving emotional/behavioral
and school outcomes for youth presenting with disruptive
behavior disorders within a large county and school district
in a rural/suburban region in a southern state.
Theme 5: Adjusting Program Infrastructure and Staffing
As outlined above in the discussion of study designs, there are
many considerations within SMH research that may influence
staffing within SMH settings (e.g., random assignment of staff
to one training condition or another; concerns about
contamination across study groups). These concerns can cause
disruption within existing practice in community partner
organizations, and it is important to ensure community partners
are informed about research procedures, and the consequences of
these procedures, prior to study initiation (see Lever et al.,
2003).
For example, prior to the start of this study, clinicians
and their supervisors were in six teams based on geo-
graphic region, within the very large county served by the
mental health agency. To enable the conduct of the study,
center leaders agreed to reduce the teams to four, two each
for the target and comparison (enhanced treatment as
usual) conditions, and to eliminate grouping by geography
to facilitate random assignment into two clean groups. For
each of the two conditions, a senior trainer provided twice-
monthly group training, and for the target condition pro-
vided at-school implementation support. Following ran-
domization, numerous supervisor–supervisee matches needed to
be reassigned because some clinicians had been randomized to
the senior trainer for the other study con-
dition. The prior arrangement of geographically based teams,
and in many cases a long-term relationship with the supervisor
(senior trainer), was comfortable for
many staff, and they expressed displeasure about having to
make changes. Such conflicts are not unusual in SMH
research (e.g., a school in a cluster randomized trial is
disappointed in being assigned to a control rather than
intervention group). To deal with these concerns, center
leaders, who were aware from the outset of the need to adjust
teams based on the randomization procedures, reinforced the
importance of the change and provided support to staff in
moving forward; the change happened fairly smoothly, enabling
the research design to proceed. Note that the large number of
existing relationships embedded within the study's random
allocation of staff required study coordinators to also address
a key threat to study validity—diffusion of the intervention
across conditions
(Campbell & Stanley, 1963; Cook & Campbell, 1979).
Thus, center leaders, senior trainers, and clinician partici-
pants were instructed to work together within an ‘honor
system’ not to share information across conditions, and
adherence was tracked through fidelity monitoring to confirm
study integrity on this dimension.
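To make the allocation step concrete, the following minimal sketch (in Python) shows one way a seeded random assignment of consolidated teams to the two conditions might be scripted; the team labels, seed value, and two-teams-per-condition split are illustrative assumptions, not the study's actual procedure.

    import random

    # Four consolidated teams (labels are hypothetical placeholders)
    teams = ["Team 1", "Team 2", "Team 3", "Team 4"]

    # A fixed seed makes the allocation reproducible
    rng = random.Random(2014)  # seed chosen arbitrarily for illustration
    shuffled = list(teams)
    rng.shuffle(shuffled)

    # Two teams per condition: target versus enhanced treatment as usual
    allocation = {"target": shuffled[:2], "comparison": shuffled[2:]}
    print(allocation)

Recording the seed and the resulting allocation in study records makes the randomization auditable after the fact, which can help when explaining assignment decisions to staff who preferred their prior team arrangements.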
Theme 6: Collaborative Participant Recruitment
Many SMH studies encounter a tension between effectively
implementing study procedures and assuring the integrity of the
independent variable, on the one hand, and recruiting adequate
numbers of participants to meet goals for appropriately powered
analyses, on the other. With finite resources and research
staff, these two functions can sometimes be in opposition. For
instance, in this study,
initial recruitment procedures involved only staff hired for
the study. This often required considerable travel to schools
to obtain parental consent and student assent forms and was
generally an inefficient strategy, given a high percentage of
no-shows.
amendments to the research protocol were made to allow
clinicians to directly recruit families into the study, and
recruitment rates improved considerably. Interestingly,
once empowered with the ability to assist with recruitment,
clinicians and center leaders came up with a number of
innovative ideas to recruit participants, including recruiting
them at intake, and in conjunction with physician
appointments at the center, which were required every
6 months. These strategies reflect the commitment of center
leaders to the success of the study and illustrate
how the expertise and innovation of the community part-
ners can be leveraged to improve research outcomes.
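As a rough illustration of what "appropriately powered" can mean when participants are clustered within clinician teams or schools, the sketch below (in Python, using the statsmodels library) computes a per-group sample size for a two-arm comparison and then inflates it by the design effect; the effect size, intracluster correlation, and cluster size shown are hypothetical placeholders rather than values from this study.

    from math import ceil
    from statsmodels.stats.power import TTestIndPower

    d = 0.5     # assumed standardized mean difference (a medium effect)
    icc = 0.05  # assumed intracluster correlation coefficient
    m = 15      # assumed average number of participants per cluster

    # Per-group n for an individually randomized two-arm trial
    n_ind = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)

    # Inflate by the design effect for clustering: DEFF = 1 + (m - 1) * ICC
    deff = 1 + (m - 1) * icc
    n_clustered = ceil(n_ind * deff)
    print(f"Per group: {ceil(n_ind)} unclustered, {n_clustered} clustered")

Monitoring recruitment against a target of this kind makes it easier to see early whether protocol amendments, such as allowing clinicians to recruit families directly, are needed to keep the study adequately powered.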
Theme 7: Maximizing Implementation Support
A major issue in school and child and adolescent mental
health is the need to move beyond traditional models of
'supervision,' which have no evidence of effectiveness, toward
models of implementation support, involving ongoing training,
on-site coaching, and administrative and emotional support,
which are emerging as critical to the success of any
intervention (see Fixsen, Naoom, Blase, Friedman, & Wallace,
2005; Weist, Lever, Bradshaw, & Owens, 2014).
In the target condition of this study, implementation sup-
port involves visits by the senior trainer and research
coordinator to all staff at least six times per year in their
schools. Center leaders allocated funding for travel and
protected the time of the senior trainer to conduct imple-
mentation support, with each implementation support event
usually taking three hours or more including travel time,
again demonstrating their significant commitment to the
success of the study. In addition, the center has supported
increasing peer-to-peer support among participants in the
target condition, granting them the ability to flex their
schedules and assignments to provide support to staff at
other schools.
Theme 8: Collaborative Dissemination
SMH research partners may not be actively involved in research
or the dissemination of research findings at the initiation of
a study, but it is recommended that partners be invited to
participate in the interpretation of results and the
dissemination of findings. This permits SMH researchers to
benefit from the insights of community partners and provides a
strong voice for the discussion of outcomes, one that can speak
to the implementation, feasibility, and endorsement of findings
in a manner different from that of an academic researcher.
In this example, although the Children’s Services
Director and two senior trainers had not been previously
involved in research, they became active collaborators with
the research team in making professional presentations, and
beginning to assist in organizing manuscripts from the study.
In this work, care was taken to involve the senior trainer from
the comparison condition in such dissemination activities,
without prematurely sharing too much information from the
target condition, to avoid a potential threat of contamination.
The experience of this research team has been that
opportunities for community partners to present and write about
research are new to them and greatly appreciated, in turn
enhancing their commitment and demonstrated support within the
collaborative partnership.
Conclusion
This paper outlined methodological innovations and con-
siderations for researchers in the area of SMH, with the
intent to serve as a guide for promoting the collection of
rigorous and actionable data in the field as well as the
generation of new ideas for SMH methodological tech-
niques. Considerations around research designs, methods
for describing effects and outcomes, issues in measurement
of process and outcomes, and the foundational role of
school and community research partnerships were dis-
cussed within the context of SMH research studies. SMH
researchers have a number of tools and strategies available
to answer the pressing research questions within the field.
Although many tools are already available to researchers,
new tools and strategies will be needed as the field evolves.
In spite of the burgeoning research base in the SMH field,
there is still a sizable disconnect between research and
practice (Atkins, Hoagwood, Kutash, & Seidman, 2010),
indicating that research methods need to continue to evolve
to answer questions that move beyond simply 'Does this
work?' to a paraphrased version of Gordon Paul's (1967)
seminal quote ‘How does it work, in which situations, with
whom, for which outcomes, by which implementers, and
can it be sustained?’
References
Achenbach, T. M., McConaughy, S. H., & Howell, C. T. (1987).
Child/adolescent behavioral and emotional problems: Implica-
tions of cross-informant correlations for situational specificity.
Psychological Bulletin, 101, 213–232.
Agency for Healthcare Research. (2013). Methods guide for effec-
tiveness and comparative effectiveness reviews. Rockville, MD:
Agency for Healthcare Research.
Andis, P., Cashman, J., Praschil, R., Oglesby, D., Adelman, H.,
Taylor, L., et al. (2002). A strategic and shared agenda to
advance mental health in schools through family and system
partnerships. International Journal of Mental Health Promotion,
4, 28–35.
Atkins, M. S., Hoagwood, K. E., Kutash, K., & Seidman, E. (2010).
Toward the integration of education and mental health services.
Administration and Policy in Mental Health and Mental Health
Services Research, 37, 40–47.
Berkowitz, H. (1968). A preliminary assessment of the extent of
interaction between child psychiatric clinics and public schools.
Psychology in the Schools, 5, 291–295.
Brandt, N. E., Glimpse, C., Fette, C., Lever, N. A., Cammack, N. L.,
& Cox, J. (2014). Advancing effective family–school–commu-
nity partnerships. In M. Weist, N. Lever, C. Bradshaw, & J.
Owens (Eds.), Handbook of school mental health: Research,
training, practice, and policy (pp. 209–222). New York:
Springer.
Briesch, A. M., Chafouleas, S. M., Neugebauer, S. R., & Riley-
Tillman, T. C. (2013). Assessing influences on intervention use:
Revision of the usage rating profile-intervention. Journal of
School Psychology, 51, 81–96.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-
experimental designs for research. Chicago, IL: Rand-McNally.
Campbell, M. K., Thomson, S., Ramsay, C. R., MacLennan, G. S., &
Grimshaw, J. M. (2004). Sample size calculator for cluster
randomized trials. Computers in Biology and Medicine, 34,
113–125.
Chafouleas, S. M., Briesch, A. M., Riley-Tillman, T. C., Christ, T. J.,
Black, A. C., & Kilgus, S. P. (2010a). An investigation of the
generalizability and dependability of Direct Behavior Rating
Single Item Scales (DBR-SIS) to measure academic engagement
and disruptive behavior of middle school students. Journal of
School Psychology, 48, 219–246. doi:10.1016/j.jsp.2010.02.001.
Chafouleas, S. M., Briesch, A. M., Riley-Tillman, T. C., Christ, T. J.,
Volpe, R. J., & Gresham, F. (2009, August). Review of methods
for formative assessment of child social behavior. Symposium
presented at the meeting of the American Psychological
Association, Toronto, Canada.
Chafouleas, S. M., Kilgus, S. P., Riley-Tillman, T. C., Jaffery, R., &
Harrison, S. (2012). Preliminary evaluation of various training
components on accuracy of direct behavior ratings. Journal of
School Psychology. doi:10.1016/j.jsp.2011.11.007.
Chafouleas, S. M., Volpe, R. J., Gresham, F. M., & Cook, C. (2010b).
School-based behavioral assessment within problem-solving
models: Current status and future directions. School Psychology
Review, 34, 343–349.
Clauss-Ehlers, C., Serpell, Z., & Weist, M. D. (2013). Handbook of
culturally responsive school mental health: Advancing research,
training, practice, and policy. New York: Springer.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112,
155–159.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation:
Design and analysis issues for field settings. Boston, MA:
Houghton Mifflin.
Cordray, D. S., & Pion, G. M. (2006). Treatment strength and
integrity: Models and methods. In R. R. Bootzin & P.
E. McKnight (Eds.), Strengthening research methodology:
Psychological measurement and evaluation (pp. 103–124).
Washington, D.C.: American Psychological Association.
Craig, W. M., & Pepler, D. J. (1998). Observations of bullying and
victimization in the school yard. Canadian Journal of School
Psychology, 13, 41–59.
Cunningham, C. E., Vaillancourt, T., Rimas, H., Deal, K., Cunning-
ham, L., Short, K., et al. (2009). Modeling the bullying
prevention program preferences of educators: A discrete choice
conjoint experiment. Journal of Abnormal Child Psychology, 37,
929–943.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary
and secondary prevention: Are implementation effects out of
control? Clinical Psychology Review, 18, 23–45.
Donner, A., & Klar, N. (2004). Pitfalls of and controversies in cluster
randomization trials. American Journal of Public Health, 94,
416–422.
Dumas, J. E., Lynch, A. M., Laughlin, J. E., Phillips Smith, E., &
Prinz, R. J. (2001). Promoting intervention fidelity: Conceptual
issues, methods, and preliminary results from the Early Alliance
prevention trial. American Journal of Preventive Medicine, 20,
38–47.
Duncan, G. J., & Magnuson, K. (2007). Penny wise and effect size
foolish. Child Development Perspectives, 1, 46–51.
DuPaul, G. J., Eckert, T. L., & Vilardo, B. (2012). The effects of
school-based interventions for attention deficit hyperactivity
disorder: A meta-analysis 1996–2010. School Psychology
Review, 41, 387–412.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A
review of research on the influence of implementation on
program outcomes and the factors affecting implementation.
American Journal of Community Psychology, 41, 327–350.
Evans, S. W., Allen, J., Moore, S., & Strauss, V. (2005). Measuring
symptoms and functioning of youth with ADHD in middle
schools. Journal of Abnormal Child Psychology, 33, 695–706.
Evans, S. W., Brady, C. E., Harrison, J. R., Bunford, N., Kern, L.,
State, T., et al. (2013). Measuring ADHD and ODD symptoms
and impairment using high school teachers’ ratings. Journal of
Clinical Child and Adolescent Psychology, 42, 197–207.
Evans, C., Margison, F., & Barkham, M. (1998). The contribution of
reliable and clinically significant change methods to evidence-
based mental health. Evidence-Based Mental Health, 1, 70–73.
Fabiano, G. A., Pelham, W. E., Gnagy, E. M., Burrows-MacLean, L.,
Coles, E. K., Chacko, A., et al. (2007). The single and combined
effects of multiple intensities of behavior modification and
multiple intensities of methylphenidate in a classroom setting.
School Psychology Review, 36, 195–216.
Fabiano, G. A., Pelham, W. E., Waschbusch, D., Gnagy, E. M.,
Lahey, B. B., Chronis, A. M., et al. (2006). A practical
impairment measure: Psychometric properties of the impairment
rating scale in samples of children with attention-deficit/
hyperactivity disorder and two school-based samples. Journal of
Clinical Child and Adolescent Psychology, 35, 369–385.
Fabiano, G. A., Vujnovic, R., Pelham, W. E., Waschbusch, D. A.,
Massetti, G. M., Yu, J., et al. (2010). Enhancing the effectiveness
of special education programming for children with ADHD
using a daily report card. School Psychology Review, 39,
219–239.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace,
F. (2005). Implementation research: A synthesis of the literature.
(FMHI Publication #231). Tampa, FL: University of South
Florida, National Implementation Research Network.
Fonagy, P., Twemlow, S. W., Vernberg, E. M., Nelson, J. M., Dill, E.
J., Little, T. D., et al. (2009). A cluster randomized controlled
trial of child-focused psychiatric consultation and a school
systems-focused intervention to reduce aggression. Journal of
Child Psychology and Psychiatry, 50, 607–616.
Frazier, S. L., Formoso, D., Birman, D., & Atkins, M. S. (2008).
Closing the research to practice gap: Redefining feasibility.
Clinical Psychology: Science and Practice, 15, 125–129.
Gottfredson, G. D., & Gottfredson, D. C. (2001). What schools do to
prevent problem behavior and promote safe environments.
Journal of Educational and Psychological Consultation, 12,
313–344.
Gresham, F. M. (1989). Assessment of treatment integrity in school
consultation and prereferral intervention. School Psychology
Review, 18, 37–50.
Hagermoser Sanetti, L. M., & Kratochwill, T. R. (2009). Toward
developing a science of treatment integrity: Introduction to the
special series. School Psychology Review, 38, 445–459.
Hill, C., Bloom, H., Black, A. R., & Lipsey, M. W. (2008). Empirical
benchmarks for interpreting effect sizes in research. Child
Development Perspectives, 2, 172–177.
Humphrey, N. (2013). Social and emotional learning: A critical
appraisal. London: Sage.
Humphrey, N., Hanley, T., Lendrum, A., & Wigelsworth, M. (2012).
Assessing therapeutic outcomes. In T. Hanley, N. Humphrey, &
C. Lennie (Eds.), Adolescent counselling psychology: Theory,
research and practice (pp. 157–170). London: Routledge.
Humphrey, N., Kalambouka, A., Wigelsworth, M., Lendrum, A.,
Lennie, C., & Farrell, P. (2010). New beginnings: Evaluation of
a short social–emotional intervention for primary-aged children.
Educational Psychology, 30, 513–532.
Humphrey, N., Lendrum, A., Barlow, A., Wigelsworth, M., &
Squires, G. (2013). Achievement for All: Improving psychoso-
cial outcomes for students with special educational needs and
disabilities. Research in Developmental Disabilities, 34,
1210–1225.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A
statistical approach to defining meaningful change in psycho-
therapy research. Journal of Consulting and Clinical Psychol-
ogy, 59, 12–19.
Kataoka, S., Jaycox, L. H., Wong, M., Nadeem, E., Langley, A.,
Tang, L., et al. (2011). Effects on school outcomes in low-
income minority youth: Preliminary findings from a community-
partnered study of a school-based trauma intervention. Ethnicity
and Disease, 21, S1-71–S1-77.
Kataoka, S., Stein, B. D., Jaycox, L. H., Wong, M., Escudero, P.,
Tu, W., et al. (2003). A school-based mental health program
for traumatized Latino immigrant children. Journal of the
American Academy of Child and Adolescent Psychiatry, 42(3),
311–318.
Kazdin, A. E. (1982). Single-case research designs: Methods for
clinical and applied settings. New York, NY: Oxford University
Press.
Kazdin, A. E. (2005). Evidence-based assessment for children and
adolescents: Issues in measurement development and clinical
application. Journal of Clinical Child and Adolescent Psychology,
34, 548–558.
Kendall, P. C., & Grove, W. M. (1988). Normative comparisons in
therapy outcome. Psychological Assessment, 10, 147–158.
Kerry, S., & Bland, J. (1998). Statistics notes: Sample size in cluster
randomisation. British Medical Journal, 316, 549.
Killip, S., Mahfoud, Z., & Pearce, K. (2004). What is an intracluster
correlation coefficient? Crucial concepts for researchers. The
Annals of Family Medicine, 3, 204–208.
Kolko, D. J., Bukstein, O. G., & Barron, J. (1999). Methylphenidate
and behavior modification in children with ADHD and comorbid
ODD and CD: Main and incremental effects across settings.
Journal of the American Academy of Child and Adolescent
Psychiatry, 38, 578–586.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom,
S. L., Rindskopf, D. M., et al. (2010). Single-case designs
technical documentation. Retrieved from: http://ies.ed.gov/ncee/
wwc/pdf/wwc_scd.pdf.
Lane, K. L., Beebe-Frankenberger, M. E., Lambros, K. M., & Pierson,
M. (2001). Designing effective interventions for children at risk
for antisocial behavior: An integrated model of components
necessary for making valid inferences. Psychology in the
Schools, 38, 365–379.
Langberg, J. M., Vaughn, A. J., Williamson, P., Epstein, J. N., Girio-
Herrera, E., & Becker, S. P. (2011). Refinement of an
organizational skills intervention for adolescents with ADHD
for implementation by school mental health providers. School
Mental Health, 3, 143–155.
Lendrum, A., & Humphrey, N. (2012). The importance of studying
the implementation of school-based interventions. Oxford
Review of Education, 38, 635–652.
Lever, N. A., Adelsheim, S., Prodente, C., Christodulu, K. V.,
Ambrose, M. G., Schlitt, J., et al. (2003). System, agency and
stakeholder collaboration to advance mental health programs in
schools. In M. D. Weist, S. W. Evans, & N. A. Lever (Eds.),
Handbook of school mental health: Advancing practice and
research (pp. 149–162). New York, NY: Springer.
Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K.,
Cole, M. W., et al. (2012). Translating the statistical represen-
tation of effects of education interventions into more readily
interpretable forms (NCSER 2013-3000). Washington, D.C.:
National Center for Special Education Research, Institute of
Education Sciences, U.S. Department of Education.
Lonigan, C. J., Elbert, J. C., & Johnson, S. B. (1998). Empirically
supported psychosocial interventions for children: An overview.
Journal of Clinical Child Psychology, 27, 138–145.
Losen, D. J., & Gillespie, J. (2012, August). Opportunities suspended:
The disparate impact of disciplinary exclusion from school.
University of California, Los Angeles: The Civil Rights Project.
Retrieved from http://civilrightsproject.ucla.edu/resources/pro
jects/center-for-civil-rights-remedies/school-to-prison-folder/fed
eral-reports/upcoming-ccrr-research.
Martens, B. K. (1993). Social labeling, precision of measurement, and
problem-solving: Key issues in the assessment of children’s
emotional problems. School Psychology Review, 22, 308–312.
Mash, E. J., & Hunsley, J. (2005). Evidence-based assessment of
child and adolescent disorders: Issues and challenges. Journal of
Clinical Child and Adolescent Psychology, 34, 362–379.
Miller, F. G., Johnson, A. H., Welsh, M., Chafouleas, S. M., Riley-
Tillman, T. C., & Fabiano, G. (2013, July). Evaluation of
universal screening methods to identify behavioral risk. Poster
presented at the meeting of the American Psychological
Association, Honolulu, Hawaii.
Miller, F. G., Neugebauer, S. R., Chafouleas, S. M., Briesch, A. M., &
Riley-Tillman, T. C. (2013, July). Examining innovation usage:
Construct validation of the Usage Rating Profile-Assessment.
Poster presented at the meeting of the American Psychological
Association, Honolulu, Hawaii.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome
studies. Clinical Psychology Review, 11, 247–266.
Muijs, D. (2010). Doing quantitative research in education with SPSS
(2nd ed.). London: Sage.
Murphy, S. A. (2005). An experimental design for the development of
adaptive treatment strategies. Statistics in Medicine, 24,
1455–1481.
Muthén, B., Brown, C. H., Masyn, K., Jo, B., Khoo, S.-T., Yang, C.-
C., et al. (2002). General growth mixture modeling for random-
ized preventive interventions. Biostatistics, 3, 459–475.
Nahum-Shani, I., Qian, M., Almirall, D., Pelham, W. E., Gnagy, E.
M., Fabiano, G. A., et al. (2012a). Q-learning: A data analysis
method for constructing adaptive interventions. Psychological
Methods, 17, 478–494.
Nahum-Shani, I., Qian, M., Almirall, D., Pelham, W. E., Gnagy, E.
M., Fabiano, G. A., et al. (2012b). Experimental design and
primary data analysis methods for comparing adaptive interven-
tions. Psychological Methods, 17, 457–477.
Nelson, J. R., Benner, G. J., Reid, R. C., Epstein, M. H., & Currin, D.
(2002). The convergent validity of office discipline referrals with
the CBCL-TRF. Journal of Emotional and Behavioral Disor-
ders, 10, 181–188.
O’Connell, M., Boat, T., & Warner, K. E. (2009). Preventing mental,
emotional, and behavioral disorders among young people:
Progress and possibilities. Washington, DC: The National
Academies Press.
O’Leary, K. D., & O’Leary, S. G. (1982). Classroom management:
The successful use of behavior modification. New York, NY:
Pergamon Press Inc.
Paul, G. L. (1967). Strategy of outcome research in psychotherapy.
Journal of Consulting Psychology, 31, 109–118.
Pelham, W. E., Carlson, C., Sams, S. E., Vallano, G., Dixon, J., &
Hoza, B. (1993). Separate and combined effects of methylphe-
nidate and behavior modification on boys with ADHD in the
classroom. Journal of Consulting and Clinical Psychology, 61,
506–515.
Pelham, W. E., Fabiano, G. A., & Massetti, G. M. (2005). Evidence-
based assessment of attention-deficit/hyperactivity disorder in
children and adolescents. Journal of Clinical Child and Adoles-
cent Psychology, 34, 449–476.
Pfiffner, L. J., Kaiser, N. M., Burner, C., Zalecki, C., Rooney, M.,
Setty, P., et al. (2011). From clinic to school: Translating a
collaborative school–home behavioral intervention for ADHD.
School Mental Health, 3, 127–142.
Pfiffner, L. J., & O’Leary, S. G. (1987). The efficacy of all-positive
management as a function of the prior use of negative
consequences. Journal of Applied Behavior Analysis, 20,
265–271.
Pfiffner, L. J., Rosen, L. A., & O’Leary, S. G. (1985). The efficacy of an
all-positive approach to classroom management. Journal of Applied
Behavior Analysis, 18, 257–261.
Power, T. J., Blom-Hoffman, J., Clarke, A. T., Riley-Tillman, T. C.,
Kelleher, C., & Manz, P. H. (2005). Reconceptualizing inter-
vention integrity: A partnership-based framework for linking
research with practice. Psychology in the Schools, 42, 495–507.
Power, T. J., Mautone, J. A., Soffer, S. L., Clarke, A. T., Marshall, S.
A., Sharman, J., et al. (2012). A family–school intervention for
children with ADHD: Results of a randomized clinical trial.
Journal of Consulting and Clinical Psychology, 80, 611–623.
Puffer, S., Torgerson, D. J., & Watson, J. (2005). Cluster randomized
controlled trials. Journal of Evaluation in Clinical Practice, 11,
479–483.
Raudenbush, S. W., Spybrook, J., Congdon, R., Liu, X., Martinez,
A., Bloom, H., et al. (2011). Optimal design plus empirical
evidence (version 3.0). Retrieved from: http://www.wtgrantfoun
dation.org/resources/consultation-service-and-optimal-design.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear
models. Thousand Oaks, CA: Sage.
Riley-Tillman, T. R., & Burns, M. K. (2009). Evaluating educational
interventions: Single-case designs for measuring response to
intervention. New York, NY: The Guilford Press.
Robb, J. A., Sibley, M. H., Pelham, W. E., Foster, E. M., Molina, B.
S. G., Gnagy, E. M., et al. (2011). The estimated annual cost of
ADHD to the U.S. education system. School Mental Health, 3,
169–177.
Schnoes, C., Reid, R., Wagner, M., & Marder, C. (2006). ADHD
among students receiving special education services: A national
survey. Exceptional Children, 72, 483–496.
Schulte, S. C., Easton, J. E., & Parker, J. (2009). Advances in
treatment integrity research: Multidisciplinary perspectives on
the conceptualization, measurement, and enhancement of treat-
ment integrity. School Psychology Review, 38, 460–475.
Sellström, E., & Bremberg, S. (2006). Is there a ‘school effect’ on
pupil outcomes? A review of multilevel studies. Journal of
Epidemiology and Community Health, 60, 149–155.
Shernoff, E. S., Marinez-Lora, A. M., Frazier, S. L., Jakobsons, L. J.,
Atkins, M. S., & Bonner, D. (2011). Teachers supporting
teachers in urban schools: What iterative research designs can
teach us. School Psychology Review, 40, 465–485.
Skiba, R. J., Horner, R. H., Chung, C. G., Rausch, M. K., May, S. L.,
& Tobin, T. (2011). Race is not neutral: A national investigation
of African American and Latino disproportionality in school
discipline. School Psychology Review, 40, 85–107.
Social and Character Development Research Consortium. (2010).
Efficacy of schoolwide programs to promote social and charac-
ter development and reduce problem behavior in elementary
school children (NCER 2011–2001). Washington, D.C.:
National Center for Education Research, Institute of Education
Sciences, U.S. Department of Education.
Spoth, R., & Redmond, C. (1993). Identifying program preferences
through conjoint analysis: Illustrative results from a parent
sample. American Journal of Health Promotion, 8, 124–133.
Stein, B. D., Jaycox, L. H., Kataoka, S. H., Wong, M., Tu, W., Elliott,
M. N., et al. (2003). A mental health intervention for school-
children exposed to violence: A randomized controlled trial.
Journal of the American Medical Association, 290(5), 603–611.
Sugai, G., Horner, R., & Walker, H. (2000). Preventing school
violence: The use of office referral to assess and monitor school-
wide discipline interventions. Journal of Emotional and Behav-
ioral Disorders, 8, 94–101.
U.S. Department of Education, Institute of Education Sciences, &
What Works Clearinghouse. (2012, June). WWC review of
report: Enhancing the effectiveness of special education pro-
gramming for children with attention-deficit/hyperactivity disor-
der using a daily report card. Retrieved from http://whatworks.
ed.gov.
Volpe, R., & Fabiano, G. A. (2013). Daily behavior report cards: An
evidence-based system of assessment and intervention. New York:
The Guilford Press.
Wagner, M., Kutash, K., Duchnowski, A. J., Epstein, M. H., & Sumi,
W. C. (2005). The children and youth we serve: A national
picture of the characteristics of students with emotional distur-
bances receiving special education. Journal of Emotional and
Behavioral Disorders, 13, 79–96.
Waltz, J., Addis, M. E., Koerner, K., & Jacobson, N. S. (1993).
Testing the integrity of a psychotherapy protocol: Assessment of
adherence and competence. Journal of Consulting and Clinical
Psychology, 61, 620–630.
Weist, M. D., Lever, N., Bradshaw, C., & Owens, J. S. (2014). Further
advancing the field of school mental health. In M. Weist, N.
Lever, C. Bradshaw, & J. S. Owens (Eds.), Handbook of school
mental health: Research, training, practice, and policy (pp.
1–16). New York: Springer.
Weist, M. D., Lever, N., Stephan, S., Youngstrom, E., Moore, E.,
Harrison, B., et al. (2009). Formative evaluation of a framework
for high quality, evidence-based services in school mental
health. School Mental Health, 1(3), 196–211.
Weisz, J. R., Thurber, C. A., Sweeney, L., Proffitt, V. D., &
LeGagnoux, G. L. (1997). Brief treatment of mild-to-moderate
child depression using primary and secondary control enhance-
ment training. Journal of Consulting and Clinical Psychology,
65, 703–707.
What Works Clearinghouse. (2010). What Works Clearinghouse:
Procedures and standards handbook (version 2.1). Retrieved
from: http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_
procedures_v2_1_standards_handbook.pdf.
What Works Clearinghouse. (2013). What Works Clearinghouse:
Procedures and standards handbook (version 3.0). Retrieved
from: http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_
procedures_v3_0_draft_standards_handbook.pdf.
Wigelsworth, M., Humphrey, N., Kalambouka, A., & Lendrum, A.
(2010). A review of key issues in the measurement of children’s
social and emotional skills. Educational Psychology in Practice,
26, 173–186.