Validating Health Insurance Coverage Survey Estimates: A Comparison of Self-Reported Coverage and Administrative Data Records

Public Opinion Quarterly. 08/2009; DOI: 10.1093/poq/nfn013

ABSTRACT We administered a health insurance coverage survey module to a sample of 4,575 adult Blue Cross and Blue Shield of Minnesota (BCBS) members to examine whether people who have health insurance coverage report themselves as uninsured. We were also interested in whether respondents correctly classify themselves as having commercial, Medicare, MinnesotaCare, and/or Medicaid coverage (the four sample strata). The BCBS of Minnesota sample is drawn from both public and commercial health insurance coverage strata that are important to policy research involving survey data. Our findings support the validity of the module for determining whether someone has any health insurance coverage: only 0.4 percent of BCBS members answered the survey as though they were uninsured. However, we find problems for researchers interested in using survey responses to identify specific types of public coverage. For example, only 21 percent of self-reported Medicaid coverage came from known Medicaid enrollees, and only 67 percent of self-reported MinnesotaCare coverage came from known MinnesotaCare enrollees. We conclude with a discussion of the study's implications for understanding the Medicaid "undercount" and the validity of self-reported health insurance coverage.

  • ABSTRACT: To assess why survey estimates of Medicaid enrollment are 43 percent lower than raw Medicaid program enrollment counts (the "Medicaid undercount"). Data: linked 2000-2002 Medicaid Statistical Information System (MSIS) records and the 2001-2002 Current Population Survey (CPS). The Centers for Medicare and Medicaid Services provided the Census Bureau with its MSIS file, and the Census Bureau linked the MSIS to the CPS data within its secure data analysis facilities. We analyzed both how often Medicaid enrollees incorrectly answer the CPS health insurance item and the effect of imperfect concept alignment (e.g., inclusion in the MSIS of people who are not in the CPS sample frame or who were enrolled in Medicaid in more than one state during the year). Adjusting the Medicaid enrollee data for imperfect concept alignment reduces the raw Medicaid undercount considerably (by 12 percentage points). However, survey response error plays an even larger role: 43 percent of Medicaid enrollees answered the CPS as though they were not enrolled, and 17 percent reported being uninsured. The CPS is widely used for health policy analysis, but it is a poor measure of Medicaid enrollment at any time during the year because many people who are enrolled in Medicaid fail to report it and may be incorrectly coded as uninsured. This discrepancy should be considered when using the CPS for policy research.
    Health Services Research 02/2009; 44(3):965-87. DOI: 10.1111/j.1475-6773.2008.00941.x
  • ABSTRACT: Record check studies show that survey estimates of enrollment in government-assistance programs tend to be lower than counts compiled from program administrative records, and this undercount is especially pronounced for Medicaid. Studies specific to Medicaid point to false-negative reporting by enrollees as the main explanation; however, their results differ on the level of this response error. It is unclear how much of this variation reflects genuine discrepancies in how different surveys measure Medicaid versus artifacts of the different methods used to measure the undercount. This study helps clarify the question by using one set of variables, derived from the same administrative database, to separately model Medicaid misreporting in two surveys fielded in 2001: the Current Population Survey Annual Social and Economic Supplement (CPS ASEC) and the National Health Interview Survey (NHIS). Results suggest that survey design has an important effect on Medicaid reporting; most notably, differences in reference period alone are enough to explain the differences in the probability of false-negative reporting between the CPS and NHIS, controlling for sample differences. The results corroborate findings that the probability of misreporting depends on characteristics of the enrollee, and they add evidence that many enrollee-related predictors of misreporting may be quite robust to some differences in survey design. To learn more about the relationships between specific features of survey design and false-negative reporting, it would be fruitful to look more closely at the effects of having private insurance at the same time as Medicaid and of being enrolled in other assistance programs.
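The undercount decomposition quoted in the first related abstract can be sketched numerically. This is an illustrative calculation only, using the figures reported there (a 43 percent raw undercount, of which about 12 percentage points are attributable to imperfect concept alignment); it is not the authors' actual estimation method.

```python
# Illustrative decomposition of the raw "Medicaid undercount", using figures
# quoted in the abstract above. A sketch only, not the study's methodology.

raw_undercount = 0.43            # survey estimate is 43% below program counts
concept_alignment_points = 0.12  # points removed by concept-alignment adjustment

# Undercount remaining after the concept-alignment adjustment -- the part
# attributable largely to survey response error.
residual_undercount = raw_undercount - concept_alignment_points
print(f"Residual undercount after adjustment: {residual_undercount:.0%}")
# prints "Residual undercount after adjustment: 31%"
```

This kind of back-of-the-envelope split makes the abstract's point concrete: even after reconciling the two data sources conceptually, most of the gap remains and must come from response error.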