Treatment services: triangulation of methods when there is no gold standard.

Brandeis University, Institute for Behavioral Health, Heller School for Social Policy and Management, Waltham, Massachusetts 02454, USA.
Substance Use & Misuse (Impact Factor: 1.11). 11/2010; 46(5):620-32. DOI: 10.3109/10826084.2010.528119
Source: PubMed

ABSTRACT: Information about treatment services can be ascertained in several ways. We examine the level of agreement among data on substance user treatment services collected via multiple methods and respondents in the nationally representative Alcohol and Drug Services Study (ADSS, 1996-1999), and potential reasons for discrepancies. Data were obtained separately from facility director reports, treatment record abstracts, and client interviews. Concordance was generally acceptable across methods and respondents. Although any of these methods should be adequate, additional information is gleaned from multiple sources.
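Agreement between two data sources on a yes/no item (for example, whether a service was provided) is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, using hypothetical counts rather than the actual ADSS data:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table between two sources.

    a = both sources report the item, d = neither reports it,
    b and c = the two kinds of disagreement.
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement expected from each source's marginal totals.
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts for 100 clients: e.g., facility director report
# vs. client interview on whether a given service was received.
print(round(cohens_kappa(40, 5, 10, 45), 2))  # 0.7
```

Here raw agreement is 85%, but half of that would be expected by chance given the marginals, leaving kappa at 0.70.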

  • ABSTRACT: This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.
    Biometrics 04/1977; 33(1):159-74. · 1.41 Impact Factor
  • ABSTRACT: Because it corrects for chance agreement, kappa (κ) is a useful statistic for calculating interrater concordance. However, kappa has been criticized because its computed value is a function not only of sensitivity and specificity, but also of the prevalence, or base rate, of the illness of interest in the particular population under study. For example, it has been shown that, for a hypothetical case in which sensitivity and specificity remain constant at .95 each, kappa falls from .81 to .14 when the prevalence drops from 50% to 1%. Thus, differing values of kappa may be entirely due to differences in prevalence. Calculation of agreement presents different problems depending on whether one is studying reliability or validity. We discuss quantification of agreement in the pure validity case, the pure reliability case, and those studies that fall somewhere between. As a way of minimizing the base rate problem, we propose a statistic for the quantification of agreement (the Y statistic), which can be related to kappa but which is completely independent of prevalence in the case of validity studies and relatively so in the case of reliability.
    Archives of General Psychiatry 08/1985; 42(7):725-8. · 13.77 Impact Factor
  • ABSTRACT: In this study, a method was developed to identify health plan members with hypertension from insurance claims, using medical records and a patient survey for validation. A sample of 2,079 patients from two study sites with medical service or pharmacy claims indicating a diagnosis of essential hypertension was surveyed, and the medical records of 182 of the 1,275 survey respondents were reviewed. Where the criteria to identify hypertensive patients used both the medical and pharmacy claims, there was 96% agreement with either the medical record or the patient survey. Where the criteria relied on medical claims alone, the agreement rate decreased to 74% with the medical record and 64% with the patient survey. Where the criteria relied on the pharmacy claims alone, the agreement rate was 67% with the medical record and 75% with the patient survey. Combined evidence from medical service and pharmacy claims yielded a high level of agreement with alternative, more costly sources of data in identifying patients with essential hypertension. As it is more thoroughly investigated, claims data should become a more widely accepted resource for epidemiologic research.
    Medical Care 07/1993; 31(6):498-507. · 3.23 Impact Factor
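The base-rate effect described in the second abstract above can be reproduced directly. Assuming two raters who err independently given the true state, each with the stated sensitivity and specificity of .95, the expected kappa between them depends only on prevalence:

```python
def kappa_two_raters(prevalence, sensitivity, specificity):
    """Expected kappa between two conditionally independent raters,
    each rating with the given sensitivity and specificity."""
    p, se, sp = prevalence, sensitivity, specificity
    # Probability the raters agree: both positive or both negative,
    # summed over true cases (weight p) and true non-cases (weight 1-p).
    p_obs = p * (se**2 + (1 - se)**2) + (1 - p) * (sp**2 + (1 - sp)**2)
    # Each rater's marginal probability of a positive call.
    p_pos = p * se + (1 - p) * (1 - sp)
    p_chance = p_pos**2 + (1 - p_pos)**2
    return (p_obs - p_chance) / (1 - p_chance)

print(round(kappa_two_raters(0.50, 0.95, 0.95), 2))  # 0.81
print(round(kappa_two_raters(0.01, 0.95, 0.95), 2))  # 0.14
```

Observed agreement stays at 90.5% in both cases; only the chance-agreement correction changes with prevalence, which is exactly why identical rater accuracy can yield kappas of .81 and .14.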