Reinterpreting the Fort Bragg Evaluation findings: the message does not change.

Vanderbilt Institute for Public Policy Studies, Vanderbilt University, Nashville, TN 37212, USA.
The Journal of Mental Health Administration 02/1996; 23(1):137-45.
Source: PubMed
  • ABSTRACT: This article examines research on systems of care, which are acknowledged as the current dominant paradigm in the delivery of children's mental health services. The authors conclude that systems of care produce important system-level changes. Early results suggest, however, that these system-level changes do not improve clinical outcomes. One plausible explanation for this finding is that system interventions are too far removed from the services actually delivered, thereby limiting their potential impact. Moreover, numerous assumptions underlying the purported effectiveness of systems of care remain unvalidated. The authors propose that the primary route to improving children's mental health services is effectiveness research, rather than continued large-scale investment in systems research and development. Recommendations are made for addressing the methodological problems that researchers will confront and for developing policies that encourage future research on the effectiveness of children's mental health services.
    Applied and Preventive Psychology 12/1997; DOI:10.1016/S0962-1849(05)80062-9
  • ABSTRACT: The evaluation of programs or interventions relies on the use of reliable and valid measures, appropriate sampling and design strategies, and careful assessment of the strength and fidelity of the intervention. The first two components form the basis of most training in behavioral research and have received much attention from the field; the last is not typically considered when assessing intervention impact. Treatment strength refers to the intensity of the intervention relative to the magnitude of the problem it is intended to correct, whereas treatment fidelity refers to the degree to which the treatment as implemented matches the treatment as intended. In applied research, treatment strength must be assessed relative to the strength of usual care, that is, the "relative strength" of the program. This is an important distinction because research rarely, if ever, has control over all the services available to those in the treatment (experimental) or usual care (control-comparison) conditions. Program fidelity provides a further level of detail by examining the degree to which the intervention as implemented matches the intervention as theoretically planned. This is especially important in developmental science, where theory typically forms the basis of intervention. Research often codes treatment strength and program fidelity as binary. By ignoring the variance along these dimensions, such coding makes errors in detecting intervention effectiveness more likely, because these constructs influence statistical power. (A brief simulation illustrating this power loss appears after this list.)
    Applied Developmental Science 04/2003; 7(2):55-61. DOI:10.1207/S1532480XADS0702_2
  • Research in Community and Mental Health 12/2006; 14:201-237. DOI:10.1016/S0192-0812(06)14010-6
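
The statistical-power argument in the second abstract above lends itself to a quick demonstration. The following Python sketch is illustrative only and does not come from the cited paper; the sample size, effect size, and the median-split analysis are all assumptions chosen for the example. It simulates studies in which outcomes improve with treatment fidelity, then compares how often a significance test detects the effect when fidelity is kept continuous versus when it is collapsed to a binary low/high code.

    # Illustrative simulation (not from the cited paper): dichotomizing a
    # continuous fidelity measure discards variance and lowers statistical power.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    N_SIMS = 2000    # number of simulated studies
    N = 60           # participants per study (assumed)
    EFFECT = 0.8     # assumed slope of outcome on fidelity

    hits_continuous = 0
    hits_binary = 0

    for _ in range(N_SIMS):
        fidelity = rng.uniform(0.0, 1.0, N)                    # measured fidelity, 0..1
        outcome = EFFECT * fidelity + rng.normal(0.0, 1.0, N)  # outcome rises with fidelity

        # Continuous analysis: test the fidelity-outcome correlation directly.
        _, p_cont = stats.pearsonr(fidelity, outcome)
        hits_continuous += p_cont < 0.05

        # Binary analysis: median-split fidelity into low/high, then a t-test.
        high = fidelity > np.median(fidelity)
        _, p_bin = stats.ttest_ind(outcome[high], outcome[~high])
        hits_binary += p_bin < 0.05

    print(f"power, fidelity kept continuous: {hits_continuous / N_SIMS:.2f}")
    print(f"power, fidelity median-split:    {hits_binary / N_SIMS:.2f}")

Under these assumed parameters the continuous analysis rejects the null noticeably more often than the median-split analysis, which is the abstract's point: coding strength and fidelity as binary throws away variance that the significance test needs.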