Free, Brief, and Validated: Standardized Instruments for Low-Resource Mental Health Settings

University of Pennsylvania
Cognitive and Behavioral Practice (Impact Factor: 1.33). 03/2014; 22(1). DOI: 10.1016/j.cbpra.2014.02.002

ABSTRACT Evidence-based assessment has received little attention despite its critical importance to the evidence-based practice movement. Given the limited resources in the public sector, it is necessary for evidence-based assessment to utilize tools with established reliability and validity that are free, easily accessible, and brief. We review tools that meet these criteria for the most prevalent youth and adult mental health disorders, providing a clinical guide and reference for the selection of assessment tools in public sector settings. We also discuss recommendations for how to move the evidence-based assessment agenda forward.

    • "… and may be used in lieu of more expensive, copyrighted measures. A list of free measures can be found in the review paper included in this journal's special section (Beidas et al., 2015, in this issue). Overall, organizational resources may significantly limit the type and extent of MBC that can be implemented; however, small efforts to apply MBC (i.e., monitoring symptom change using idiographic assessments) may be beneficial for improving client outcomes (Weisz, Chorpita, et al., 2011)."
    ABSTRACT: Measurement-based care (MBC) can be defined as the practice of basing clinical care on client data collected throughout treatment. MBC is considered a core component of numerous evidence-based practices (e.g., Beck & Beck, 2011; Klerman, Weissman, Rounsaville, & Chevron, 1984) and has emerging empirical support as an evidence-based framework that can be added to any treatment (Lambert et al., 2003; Trivedi et al., 2007). The observed benefits of MBC are numerous. MBC provides insight into treatment progress, highlights ongoing treatment targets, reduces symptom deterioration, and improves client outcomes (Lambert et al., 2003). Moreover, as a framework to guide treatment, MBC has transtheoretical and transdiagnostic relevance with broad reach across clinical settings. Although MBC has primarily focused on assessing symptoms (e.g., depression, anxiety), it can also be used to gather valuable information about (a) symptoms, (b) functioning and satisfaction with life, (c) putative mechanisms of change (e.g., readiness to change), and (d) the treatment process (e.g., session feedback, working alliance). This paper provides an overview of the benefits and challenges of MBC implementation when conceptualized as a transtheoretical and transdiagnostic framework for evaluating client therapy progress and outcomes across these four domains. The empirical support for MBC use is briefly reviewed, an adult case example is presented to serve as a guide for successful implementation of MBC in clinical practice, and future directions to maximize MBC utility are discussed.
    Cognitive and Behavioral Practice 02/2014; 22(1). DOI: 10.1016/j.cbpra.2014.01.010 · 1.33 Impact Factor
    ABSTRACT: Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) the unknown psychometric and pragmatic strength of existing measures. Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: Develop reliable, valid, and pragmatic measures of three critical implementation outcomes: acceptability, appropriateness, and feasibility. Aim 3: Identify Consolidated Framework for Implementation Research and Implementation Outcome Framework-linked measures that demonstrate both psychometric and pragmatic strength. For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and a literature review to populate pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data. The study outputs of each aim are expected to have a positive impact, as they will establish and guide a comprehensive measurement-focused research agenda for implementation science and provide empirically supported measures, tools, and methods for accomplishing this work.
    Implementation Science 07/2015; 10(1):102. DOI: 10.1186/s13012-015-0287-0 · 4.12 Impact Factor