Validating Health Insurance Coverage Survey Estimates:
A Comparison Between Self-Reported Coverage and Administrative Data Records
Michael Davern, Ph.D.
Assistant Professor, University of Minnesota
Kathleen Thiede Call, Ph.D.
Associate Professor, University of Minnesota
Gestur Davidson, Ph.D.
Senior Research Associate, University of Minnesota
Timothy J. Beebe, Ph.D.
Associate Professor, Mayo Clinic College of Medicine
Lynn Blewett, Ph.D.
Associate Professor, University of Minnesota
Michael Davern, Ph.D., Assistant Professor, State Health Access Data Assistance Center,
University of Minnesota School of Public Health, 2221 University Avenue SE #345,
Minneapolis, MN 55414; phone 612-625-4835; fax 612-624-1493; e-mail email@example.com.
The authors are especially grateful to Pete Rode, Steven Foldes, Barb Schillo, Nina Alesci, Jessie
Saul and Ann Kenny for help in compiling the data. Also we would like to thank Cathi Callahan
of ARC and Linda Giannarelli of the Urban Institute for their comments on this paper. An
earlier version of this paper was presented at the American Association for Public Opinion
Research Annual Meeting in Phoenix, AZ, May 2004 by Kathleen Thiede Call,
and at the American Enterprise Institute on April 8th 2005 by Michael Davern. Preparation of
this manuscript was funded by Grant no. 038846 from The Robert Wood Johnson Foundation.
Validating Health Insurance Coverage Survey Estimates:
A Comparison Between Self-Reported Coverage and Administrative Data Records
We administered a health insurance coverage survey module to a sample of 4,575 adult
Blue Cross and Blue Shield of Minnesota (BCBS) members to examine whether people who have
health insurance coverage self-report that they are uninsured. We were also interested in
whether respondents correctly classify themselves as having commercial, Medicare,
MinnesotaCare, and/or Medicaid coverage (the four sample strata). The BCBS of Minnesota
sample is drawn from both public and commercial health insurance coverage strata that are
important to policy research involving survey data. Our findings support the validity of our
health insurance module for determining whether someone who has health insurance is correctly
coded as having health insurance coverage. While just 0.4% of the BCBS members answered
the survey as though they were uninsured, we find problems for researchers interested in using
specific self-reported types of coverage. For example, 49% of the people on MinnesotaCare
reported having Medicaid/PMAP coverage and 50% reported having commercial coverage. We
conclude with a discussion of the study’s implications for understanding the Medicaid
“undercount” and suggestions for altering the design of surveys of health insurance coverage in
order to improve the validity of the types of self-reported coverage.
Knowing who lacks insurance coverage is essential for health services research and
health policy analysis. Because there is no national list of uninsured people, the only way to
enumerate this population is through the use of a general population survey.
Many government-sponsored general population surveys, such as the Current Population
Survey's Annual Social and Economic Supplement (CPS-ASEC), the National Health Interview
Survey (NHIS), the Medical Expenditure Panel Survey-Household Component (MEPS-HC), the
Survey of Income and Program Participation (SIPP), and state-specific surveys, currently collect
data on whether respondents have health insurance coverage (see Blewett et al. 2004 for a
review). Estimates from these surveys form the core of our comprehensive knowledge about the
number and characteristics of people lacking health insurance coverage. These surveys are
widely used to simulate various policy options, distribute funding to states for public health
insurance programs, and to evaluate whether specific policies have been successful in achieving
stated goals (Blewett et al. 2004; Davern et al. 2003).
Because surveys are central to our understanding of health insurance coverage, validation
of these measures is essential. In this paper we first examine whether people with known health
insurance coverage tend to correctly classify themselves as being insured. Specifically, we look
at a stratified sample of Minnesota adults age 18 and over enrolled in both public and commercial
health insurance products within one specific plan. Surveys generally follow a "conventional"
measurement approach in which an exhaustive list of types of health insurance coverage is read
and respondents can say yes or no to having each type of coverage. At the end of the series a
verification question is asked to determine whether a person who said "no" to all the different
insurance types actually considers themselves uninsured.
In addition to examining whether people known to have health insurance coverage
accurately self-report having coverage, we also explore whether people are able to accurately
self-report the type of health insurance coverage they have. Health insurance coverage in the
United States is comprised of a complex array of government-sponsored health insurance
programs and commercially purchased health insurance products, which may lead to confusion
about the specific type of insurance a person has. Furthermore, people can, and many do, have
multiple types of coverage, both public and commercial. To measure this complex concept,
conventional survey questionnaires usually contain items asking whether respondents have any
of the many types of coverage, allowing the respondents to answer affirmatively to more than
one type.
There are two major challenges to the perceived validity of conventional surveys of
health insurance coverage. First, there are multiple surveys for the same geographic area that can
and often do produce different estimates of health insurance coverage (Lewis, Ellwood and
Czajka 1998). Second, population surveys are thought to “undercount” the number of people
enrolled in public health programs according to enrollment data (Blumberg and Cynamon 1999;
Call et al. 2002; Lewis, Ellwood and Czajka 1998).
Many surveys collect detailed information on health insurance coverage and much of
their data are in the public domain and easily accessible (Blewett et al. 2004). Ironically, it is the
very wealth of survey data in this area that has served to undermine their perceived validity. The
many surveys that measure health insurance coverage produce different estimates of the rates of
uninsurance. Despite many attempts to explain why survey estimates differ (Nelson et al. 2003;
Congressional Budget Office 2003; Fronstin 2000; Lewis, Ellwood and Czajka 1998; Farley-
Short 2001), this issue has not been settled. There are many potential reasons why survey
estimates can vary, but a rigorous accounting of their relative importance has not yet been
conducted.
In addition to conflicting estimates coming from various surveys, counts of program
participation produced by surveys are consistently different from administrative data.
Specifically, surveys usually undercount the number of people enrolled in public programs (e.g.,
Medicaid, food stamps, welfare) when compared to program enrollment data (Blumberg and
Cynamon 1999; Call et al. 2002; Lewis, Ellwood and Czajka 1998). Although the undercounting
of public program participation in surveys of health insurance coverage is not important for
determining the number of individuals enrolled in Medicaid (enrollment data should be used for
this purpose), surveys provide the only estimate of those lacking insurance and the extent to
which programs are reaching their target populations. If, as is often assumed, a significant
number of survey respondents with Medicaid coverage report that they do not have coverage,
then the survey may overestimate the rate of those who are uninsured and eligible for a program.
On the other hand, if survey respondents who are Medicaid enrollees report that they have other
types of public (e.g., Medicare) or commercial health insurance, then estimates of these coverage
types will be higher than they should be, but the overall uninsured estimate would be unaffected.
The undercount and confusion over the various survey estimates of the uninsured have
led to severe criticism of the major survey estimates of health insurance coverage. A particularly
pointed example was included in a research report released by the Heritage Foundation in August
2004 to coincide with the Census Bureau’s annual release of the CPS-ASEC estimates of the
number of uninsured:
“At the very least, the undercounting of Medicaid recipients and the undercounting of
insurance coverage…demonstrate that the Census Bureau’s figures on the uninsured do
not accurately reflect reality and may lead policymakers and the public to incorrect
impressions about the uninsured. Policymakers and policy experts have no excuse for not
owning up to this fact and should supply it as a major caveat whenever making use of the
Census data on the uninsured.” (Hunter 2004, p. 3).
Another example of this sentiment, extended to health insurance surveys in general, comes
from the recent US Congress’ Joint Economic Committee report on the uninsured:
“Methodologies for estimating the number of uninsured suffer from several shortcomings
that may lead them to overestimate the number of uninsured. Many respondents are
unsure of or forget their insurance status, which makes surveys tend to overestimate the
ranks of the uninsured. Those eligible for Medicaid, in particular, may report themselves
as uninsured… Indeed, fewer people indicate in surveys that they have Medicaid than are
accounted for by the Medicaid program.” (Joint Economic Committee 2004, p. 2)
We use a unique data set of adults enrolled in Blue Cross and Blue Shield of Minnesota
(BCBS) that allows us to answer three related research questions that speak to this broader
question of the merit of conventional survey estimates of health insurance coverage. First, at
what rate do individuals who actually have specific types of commercial and public health
insurance coverage report that they are uninsured in surveys? Second, at what rate do individuals
with specific types of coverage respond that they are insured, but report having coverage they are
not known to be enrolled in (e.g., someone known to be enrolled in PMAP/Medicaid reporting
that they have insurance coverage through an employer)?1 Although past research has examined
accuracy of reporting among public program enrollees in a similar fashion (Klerman et al. 2005;
Eberly et al. 2005; Call et al. 2002; Card, Hildreth and Shore-Sheppard 2001; Blumberg and
Cynamon 1999), no published report has systematically examined both public and commercial enrollment
as we are able to with this data set. Finally, given this information, how can conventional
health insurance survey instruments be improved in the future?
DATA AND METHODS
The source of our data is the 2003 Minnesota Adult Tobacco Survey (MATS). This
cross-sectional survey was designed to estimate smoking prevalence rates and tobacco-related
behaviors and beliefs of BCBS health plan members 18 years of age and older. As part of this
analysis we included a health insurance module on the MATS survey that allows us to validate
the survey self-reported health insurance coverage against BCBS’s administrative records. This
module forms the core of the Coordinated State Coverage Survey (CSCS) that has been fielded
in at least 12 states over the past ten years (State Health Access Data Assistance Center 2005).
The MATS survey drew a stratified random sample from four major strata of BCBS
members: 1) people 18-64 years of age with commercial health insurance coverage (e.g.,
employer-sponsored and privately-purchased); 2) people 65 years of age and older with
commercial insurance coverage (mainly those people with privately purchased Medicare
supplemental coverage, but also including seniors with employer-sponsored coverage); 3)
MinnesotaCare enrollees, which is a state-sponsored health insurance program for low-income
adults and children who are not eligible for Medicaid;2 and 4) Prepaid Medical Assistance
Program (PMAP) enrollees, who have prepaid Medicaid coverage provided by a managed care
organization.3 Three of these strata were further broken down into 18-24 year olds versus other
adults, resulting in a total of seven sampling strata.4 Members of BCBS were excluded if they
lived outside the state of Minnesota. Institutionalized members of BCBS and "dual eligible"
Medicaid/Medicare enrollees were also excluded from the MATS sample. Table 1 lists the sample
size and total population for each stratum.
1 We base the known enrollment status on the BCBS administrative data. It is possible that there are
errors in recording who has coverage and what type of coverage a person has.
--- INSERT TABLE 1 ABOUT HERE ---
The total number of adults enrolled in BCBS health insurance products and eligible to be
sampled for the survey was 897,866, or roughly 24% of the adult population in the state of
Minnesota. Survey weights were created for the respondents selected in the stratified random
sample so that the sample represents the entire BCBS population in the state. Respondents were
weighted relative to their probability of selection into the sample. The person-weight is equal to
the inverse probability of selection. This weight is adjusted through post-stratification to match
known population distributions of a given group. The post-stratifying variables are: 1) gender;
2) the total number of adults in each of the sampling strata; and 3) whether the person lived in the
Minneapolis/Saint Paul metropolitan area, another Minnesota metropolitan area, or a Minnesota
non-metro area. All reported estimates are derived from the weighted sample.
2 Over 80 percent of the adult MinnesotaCare enrollees are enrolled in BCBS.
3 Medicaid Prepaid Medical Assistance Plan (PMAP) enrollees are not a random subset of adult
Medicaid enrollees in Minnesota. In most cases, developmentally and physically disabled Medicaid-
eligible persons are not required to enroll in Medicaid managed care plans; they are allowed to
remain in the fee-for-service sector. Those enrolled in PMAP are allowed to choose from a number of
insurance carriers, of which BCBS is just one carrier in the state. As of April 2003, Medicaid in
Minnesota had an enrollment of 446,375, of whom 257,605 were in PMAP (58%). Only 17,463 of
these cases were enrolled in BCBS. There is also a long list of the types of MA recipients who are not
required to enroll in PMAP; people who are blind or disabled and people in counties not participating
in PMAP are the largest of these groups. For these reasons we interpret our PMAP findings with caution.
4 The survey was designed to obtain detailed information on the smoking habits of 18-24 year olds,
making an oversample of this population appropriate.
The survey was administered by Clearwater Research, Inc.; interviewers used Computer
Assisted Telephone Interviewing (CATI) software, and no proxy responses were allowed. Only
BCBS enrollees with listed telephone numbers were included in the study.5 BCBS does not keep
careful track of current phone numbers, and the MATS survey team was able to find listed
numbers for 65% of the BCBS records (using commercial marketing databases
and the National Change of Address file). Interviews were conducted between November 2002
and June 2003; if a sampled BCBS member was no longer a resident of a
household, follow-up contact information was requested.
Of the 4,575 completed cases, 235 were later found to no longer be enrolled in BCBS on
the day of the interview. These observations were excluded from the analysis because we could
not validate their self-reported insurance status. In addition, 18 cases were removed from the
analysis because the respondent did not answer "yes" to any type of health
coverage but answered "don't know/not sure" to one or more types of coverage; and eight cases
were removed because they were under 65 and enrolled in senior supplemental insurance. Our
final analysis sample size was therefore 4,314, representing 860,870 BCBS members.6
5 Lepkowski et al. (2005) found very little bias between listed and unlisted telephone numbers in a
large national random digit dial telephone survey in the variables they studied, although they were not
specifically interested in health insurance.
The overall response rate of the survey, calculated using the American Association for Public
Opinion Research's (AAPOR) response rate 4 (RR4) formula, was 61.5%, and the respondent
weights were post-stratified to equal total enrollees within each of the BCBS strata by region of
the state, age and gender.7 All analyses were conducted using StataSE 8.0 software to correct for
the complex survey design (StataCorp 2003).
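For readers unfamiliar with the AAPOR conventions, RR4 counts partial interviews as respondents and discounts unknown-eligibility cases by an estimated eligibility rate e (AAPOR 2004). A minimal sketch follows; the disposition counts in the example are invented, not the MATS tallies.

```python
def aapor_rr4(I: int, P: int, R: int, NC: int, O: int,
              UH: int, UO: int, e: float) -> float:
    """AAPOR Response Rate 4: (I + P) / ((I + P) + (R + NC + O) + e*(UH + UO)).
    I: complete interviews, P: partial interviews, R: refusals and break-offs,
    NC: non-contacts, O: other eligible non-interviews, UH/UO: cases of
    unknown eligibility, e: estimated share of unknown cases that are eligible."""
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))

# Invented dispositions for illustration only:
print(round(aapor_rr4(I=4400, P=175, R=1300, NC=900, O=150, UH=800, UO=0, e=0.5), 3))
```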
We have two sources of insurance status information for each respondent. The first is the
BCBS health plan administrative data regarding insurance type. For analytical purposes we
break the sampling strata into four analytically useful types of coverage: commercial coverage
for those under 65 years of age; commercial coverage for those 65 years of age and older
(including all the senior supplemental enrollees plus those enrollees over 65 with employer-
sponsored coverage); MinnesotaCare coverage; and Medicaid/PMAP coverage. 8
The second source of coverage data is the respondents’ self-reported insurance status
from the survey questions. The survey questions begin, “I am going to read you a list of
different types of insurance….” As with many health insurance surveys, the interviewer
continues by reading an exhaustive list of different types of insurance (i.e., Medicare, Railroad
Retirement Plan, Medicaid/PMAP, employer-sponsored insurance, etc.). The respondent answers
"yes," "no," or "don't know/not sure" to each type of insurance (see Table 2 for the exact
question wording). After the list is read through completely, if the person does not report having
coverage, an uninsurance verification item is asked. Answering "yes" to more than one type of
insurance is allowed. If the respondent answers "yes" to having at least one type of health
insurance coverage, then any "don't know/not sure" and refused responses in the series were
treated as "no."
6 Appendix A contains a table comparing the demographics of the BCBS sample of adults we use in
this analysis to the statewide Random Digit Dial (RDD) survey of adults that was conducted in
parallel as part of the entire 2003 Minnesota Adult Tobacco Survey. The RDD sample size was 5,525
and the AAPOR response rate (RR4) for the RDD component was 51.9%. Compared to the RDD
sample, members of the BCBS sample are more likely to be white, older, insured, and non-smokers;
to live outside the Minneapolis/Saint Paul MSA; and to have slightly less education and slightly
lower income. For more information about the RDD survey see Minnesota Adult Tobacco Survey (2003).
7 Using the AAPOR response rate (RR4), the response rate was 61% among the commercially insured, 66%
among MinnesotaCare enrollees, 58% among Medicaid enrollees, and 74% among senior
supplemental enrollees (AAPOR 2004).
8 BCBS data managers report it is highly unlikely that a sample person's coverage type would be
inaccurately classified given differences in revenue by sector; however, we have not conducted
an independent evaluation of the classification system.
--- INSERT TABLE 2 ABOUT HERE ---
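The coding rules just described reduce to a short decision procedure; the sketch below captures that logic under stated assumptions (the answer codes and function are ours, not the actual CATI program):

```python
def insurance_status(item_answers: list[str], verification_yes: bool) -> str:
    """item_answers holds 'yes', 'no', or 'dk' (don't know/not sure/refused)
    for each coverage type in the Table 2 series; verification_yes is the
    answer to the follow-up item asked when no coverage type got a 'yes'."""
    if "yes" in item_answers:
        # One affirmative response codes the person as insured; any remaining
        # 'dk' or refused answers in the series are treated as 'no'.
        return "insured"
    if "dk" in item_answers:
        # No 'yes' but at least one 'don't know': such cases were dropped.
        return "excluded"
    # All 'no': the uninsurance verification item decides.
    return "insured" if verification_yes else "uninsured"
```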
RESULTS
Table 3 provides the distribution of self-reported coverage types by the known type of
BCBS health insurance plan a person was enrolled in. Only 0.3% of the commercially insured
(<65), 0.3% of those on MinnesotaCare, 0.5% of the commercially insured (≥ 65), and 0.6% of
those on PMAP/Medicaid self-reported being uninsured. Thus, relying on these self-reports
exclusively, over the entire sample we would count 0.4% of this known insured population as
being uninsured. There are no statistically significant differences between the type of
enrollment and the likelihood of self-reporting no insurance coverage.
--- INSERT TABLE 3 ABOUT HERE ---
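Each cell in Table 3 is a weighted row proportion, for example the 0.6% of known Medicaid/PMAP enrollees reporting no coverage. A sketch of the computation follows (the data-frame columns are hypothetical; the design-corrected standard errors additionally require survey-aware software such as the Stata svy routines noted above):

```python
import pandas as pd

def weighted_rate(df: pd.DataFrame, flag: str) -> pd.Series:
    """Weighted percent with `flag` == 1 within each known coverage type.
    Expects columns 'known_type', 'weight', and a 0/1 indicator `flag`."""
    num = df.groupby("known_type").apply(lambda g: (g["weight"] * g[flag]).sum())
    den = df.groupby("known_type")["weight"].sum()
    return 100 * num / den

# e.g., weighted_rate(mats, "reported_uninsured") should reproduce the
# 0.3-0.6% uninsured column of Table 3, stratum by stratum.
```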
Further, focusing on the types of coverage self-reported, only a small percentage of
individuals under age 65 known to have commercial coverage report having public coverage
(Medicare (4.5%), MinnesotaCare (1.8%), or Medicaid/PMAP (2.1%)). However, their relative
impact on the number of people estimated to be in public coverage using the survey responses is
quite large because the commercial enrollees under age 65 are the largest population of people
included in the survey. In contrast, 49.9% of individuals known to have MinnesotaCare self-
report having commercial coverage, 6.8% report Medicare, and 48.8% self-report having
Medicaid. In spite of the large percentage of those on MinnesotaCare reporting other types of
coverage, the relative impact on the number estimated to have those types of coverage is minimal
except for the number estimated to have Medicaid.
Fully 99.4% of respondents under age 65 known to be enrolled in commercial health
insurance reported being enrolled in commercial health insurance coverage, compared to 87.9%
of those enrolled in MinnesotaCare and 84.3% of those enrolled in PMAP/Medicaid. Among
those people age 65 or older and known to have commercial coverage, 90.7% report having
commercial coverage and 98.3% report having Medicare (which they likely do have in most
cases).
Table 4 shows the percent of people who report the type of coverage they are
known to be enrolled in and/or the type of coverage they are likely to be enrolled in (i.e. the
BCBS commercial insurance enrollees age 65 and older who are presumed to be enrolled in
Medicare). For respondents under age 65 with commercial insurance, 87.4% of these survey
respondents report exclusively having commercial insurance. Among those age 65 and older
with commercial insurance, 71.7% report exclusively having both Medicare and commercial
insurance. Finally, for Medicaid/PMAP enrollees 34.5% report exclusively having
Medicaid/PMAP coverage, and 24.5% of MinnesotaCare enrollees report exclusively having
MinnesotaCare coverage.
---INSERT TABLE 4 ABOUT HERE ---
Column 2 of Table 5 provides the weighted count of those people known to be enrolled in
commercial insurance under age 65, commercial insurance age 65 and older (who are likely to be
enrolled in Medicare), MinnesotaCare, and Medicaid/PMAP. Column 3 provides the weighted
count of respondents who self-reported having that type of coverage regardless of their known
coverage type. That is, this column provides the count of coverage types that would be
generated from a general population survey estimate. The fourth column gives the percent of
respondents self-reporting that type of insurance who are known to be enrolled. As seen, 96.0%
of the total self-reported count of commercial BCBS enrollees under 65 is made up of people
known to be enrolled. For those commercial enrollees 65 years of age and older, 99.9% of the
total count of those self-reporting Medicare and commercial coverage is made up of those known
to be enrolled.
On the other hand, only 20.7% of those self-reporting Medicaid/PMAP were known to be
enrolled and 66.8% of those self-reporting MinnesotaCare were known to be enrolled. In both
cases clearly a considerable percentage of the self-reported count consists of people who are not
known to be enrolled in Medicaid/PMAP or MinnesotaCare, respectively.
--- INSERT TABLE 5 ABOUT HERE ---
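The fourth column of Table 5 is simply the weighted count of self-reporters who are also known enrollees divided by the weighted count of all self-reporters. A sketch, again with hypothetical 0/1 flag columns:

```python
import pandas as pd

def pct_self_report_known(df: pd.DataFrame, ctype: str) -> float:
    """Percent of the weighted self-reported count for coverage `ctype`
    contributed by people administratively known to be enrolled in it.
    Expects columns 'weight', f'self_{ctype}', and f'known_{ctype}'."""
    reporters = df[df[f"self_{ctype}"] == 1]
    known = reporters[reporters[f"known_{ctype}"] == 1]
    return 100 * known["weight"].sum() / reporters["weight"].sum()

# e.g., pct_self_report_known(mats, "medicaid_pmap") would return about 20.7,
# since most self-reported Medicaid comes from people not known to be enrolled.
```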
DISCUSSION
Our experiment conducted on Minnesota adult enrollees in BCBS indicates that the
estimates of uninsurance are minimally biased upward as a result of counting people who have
health insurance coverage as being uninsured. Specifically, only 0.4% of our adult respondents
with coverage self-reported they are uninsured. In addition, we find that respondents do well
reporting their “correct” insurance coverage type, although the degree of accuracy varies by the
type of insurance coverage a person is enrolled in. The vast majority of respondents answered
affirmatively to having the type of coverage they were known to be enrolled in, with the percent
affirming that "correct" coverage ranging from 99% of commercial enrollees under
65 to 84% of Medicaid/PMAP enrollees. Our analysis also exposes a large and unexpected
impact of respondents who are not known to be enrolled in Medicaid/PMAP and MinnesotaCare
for the self-reported counts of the number of people enrolled in these two programs. This
finding has implications for the “Medicaid undercount” literature and how to think about
different survey estimates and Medicaid enrollment. Finally, our results suggest the need for a
discussion of the design of health insurance survey items. The conventional health insurance
module used in this survey appears effective at measuring uninsurance and enrollment in
commercial coverage. But there are clearly concerns for measuring enrollment in Medicaid or
MinnesotaCare, an SCHIP-like state program, and we discuss our results below within the
context of relevant literature and debates in the field.
Effect on Estimates of the Uninsured
We begin by comparing our findings to the results of other experimental studies that have
examined bias in uninsurance estimates due to people with coverage incorrectly reporting they
are uninsured (Klerman et al. 2005; Eberly et al. 2005; Blumberg and Cynamon 1999; Call et al. 2002).
Blumberg and Cynamon9 found that 4.5% of the parent proxies for a sample of Minnesota
children on Medicaid falsely report no insurance coverage. The Call et al. (2002) study, which
analyzed Minnesota Medicaid enrollees of all ages, found that 4.1% of Medicaid enrollees
reported not having any insurance coverage. The Klerman et al. (2005) study found that 21.7% of
those cases they were able to match between the CPS-ASEC and Medi-Cal enrollment data
incorrectly answered the survey as though they were uninsured. Finally, a study by Eberly et al.
(2005) in Maryland found that 4.5% of the Medicaid enrollees they contacted over the
telephone answered the survey as though they were uninsured.
This large range from the seemingly negligible 0.4% in the current study to the high
21.7% in the study by Klerman et al. (2005) may be due to different methodological approaches.
There are good reasons to believe the Klerman et al. (2005) result to be an outlier, with 21.7% of
adults 15-64 enrolled in Medi-Cal answering the CPS-ASEC survey as though they were
uninsured. Specifically, the CPS-ASEC produces the highest rate of all-year uninsured people,
and in fact the CPS-ASEC all-year uninsured estimate is higher than some other national survey
estimates of the number of people uninsured at a specific point-in-time (Congressional Budget
Office 2003). The CPS-ASEC asks whether someone had Medicaid (or other types of insurance
coverage) at any time during the last calendar year, with the interviews conducted in March. As
shown by the Klerman et al. (2005) analysis, there is a significant impact of the length of the
reference period for the survey question on the answers provided in the CPS-ASEC. Those
enrolled in Medicaid only in the first few months of the reference period and not again during the
rest of the year were the least likely to report having Medicaid. Thus the high CPS-ASEC
uninsurance error rate in the Klerman et al. (2005) study is likely to be driven to some extent by
this reference period issue.
9 The results of three studies are reported by Blumberg and Cynamon. Only the results from the first
study conducted in Minnesota are included here. The uninsurance estimates from the second and third
studies are omitted from this comparison as they are subject to considerable uncertainty.
Likewise, our 0.4% result could be an outlier due to certain parameters of our study: we
did not allow for proxy interviews (and most other health insurance surveys do allow for proxy
responses), we only interviewed adults, we only sampled BCBS adults with listed telephone
numbers, and the disabled on Medicaid are not enrolled in PMAP. There may also be something
unique about the BCBS of Minnesota population that would lead them to be much more aware of
their insurance status than other insured people throughout the country.
Findings of bias in the uninsurance rates due to people with insurance coverage
answering the survey as though they are uninsured should be taken with caution. The upward
bias in the uninsurance rate is likely to be offset (at least in part) by the potential corresponding
tendency for uninsured people to report having coverage. Unfortunately this bias in the opposite
direction is much harder (if not impossible) to validate. We think that some uninsured people are
likely to respond to the survey as though they have insurance coverage given how health
insurance questionnaires are designed, providing many opportunities to answer “yes, I have
coverage” even if the respondent does not have coverage. Two likely reasons for this outcome
include: 1) “satisficing” (Holbrook, Green and Krosnick 2003); and 2) errors favoring a report of
coverage when the respondent is in fact not covered.
“Satisficing” occurs when respondents choose a socially acceptable response (Holbrook,
Green and Krosnick 2003). The odds of it occurring, in general, increase with the number of
items devoted to a topic. Having health insurance is a socially acceptable answer, and uninsured
people may eventually feel pressured to answer “yes” to having a type of coverage as they are
specifically asked about each type. In addition, it only takes one positive response to a question
about health insurance type for a respondent to be considered insured; one coding error or
misunderstood question during the health insurance survey module could lead to the uninsured
person being coded as insured. Whether it is satisficing or a coding error, there are likely to be
coverage responses recorded for respondents who do not have health insurance coverage. Thus
we hypothesize that the bias in the uninsurance rate that we observed in this paper (and has been
observed elsewhere) is likely offset by insurance coverage responses among uninsured
respondents. The Urban Institute’s TRIM simulation model actually accounts for this likely
phenomenon in its adjustment for the Medicaid undercount (Giannarelli et al. 2005). Not only
does TRIM impute Medicaid or SCHIP coverage to some CPS-ASEC respondents reporting that
they are uninsured, but it also imputes some of those reporting coverage to be uninsured.
Partially Correct Responses
Turning to the question of how well people are able to accurately report the type of
insurance coverage they have, we found that people are generally able to place themselves
appropriately, but the rate of accuracy varies by insurance coverage type.
who answer correctly varies from 99.4% of those adults under age 65 who are commercially
insured to 84.3% of those who have Medicaid/PMAP. Although this rate is lower than that for
the commercially insured, it is important to remember that the vast majority of Medicaid
enrollees know they have Medicaid (84.3%). This is consistent with Card, Hildreth and
Shore-Sheppard (2001), Klerman et al. (2005) and Eberly et al. (2005). The estimates from these
studies for correctly reporting Medicaid coverage range from 87.5% (Eberly et al. 2005) to
72.3% (Klerman et al. 2005) of those on Medicaid responding that they have Medicaid. Again,
the lower accuracy in the Klerman study is likely attributable to the reference period used in
asking about health insurance coverage.
These findings are in contrast to Call et al. (2002) who found that 54% of Medicaid
enrollees reported having Medicaid. However, the Call et al. (2002) study allowed only one
“yes” response to a type of coverage and imposed a hierarchy of insurance status so that those
responding “yes” to Medicare were not asked the Medicaid survey item. As shown in the
analysis of the BCBS data (see Table 3) a fair number of Medicaid enrollees—when asked both
the Medicaid and Medicare question—answer yes to both. Therefore our finding that 85% of
Medicaid respondents reported having Medicaid would likely be consistent with Call et al.
(2002) had they used a conventional health insurance question module.
The second lowest rate of accurately reporting known coverage was found for
MinnesotaCare respondents, 87.9% of whom correctly self-report their coverage. Although this
is much lower than the rate for commercial coverage (under age 65 years), it is virtually the same
rate found by Call et al. (2002) for their MinnesotaCare enrollees. Greater accuracy among
MinnesotaCare enrollees is expected as they pay a monthly sliding scale premium to remain
enrolled in the program. Finally, BCBS enrollees age 65 or older—having commercial coverage
and likely having Medicare—are more likely to report Medicare (98%) than commercial
insurance coverage (91%).
Following from our findings we conclude that survey instruments employing a
conventional point-in-time measure of health insurance coverage and a "check all that
apply" approach, such as the MEPS-HC, the CSCS (the survey module validated in this paper),
the NHIS, and the SIPP, do a good job of gauging whether an
insured person has health insurance coverage. Furthermore, the rate of falsely reporting
uninsurance in our study was low. On the other hand, the large number of people enrolled in
public programs who report having the wrong type of public coverage raises the issue of
whether accurate estimates of specific public programs are possible. Follow-up analysis should be
conducted on whether the federal surveys employing the conventional health insurance survey
module demonstrate the same pattern.
Our expectation is that other federal surveys with point-in-time measures would likely
have significantly fewer people reporting the wrong type of public coverage than we found. This
expectation is based on two things. First, the federal surveys ask respondents during face-to-face
interviews to show their insurance card (e.g., Division of Health Interview Statistics 2003;
Cohen 1997); the NHIS and the MEPS-HC are conducted exclusively face-to-face, while the
CPS-ASEC and SIPP are conducted mainly by phone with some interviews conducted
face-to-face (see footnote 10 for details). A second possible explanation of our findings is that
many individuals and families
may be confused about their program of enrollment either because they move between programs
as their circumstances (e.g., income) change, or because they may sign up for an SCHIP-like
program but instead meet the eligibility criteria and become enrolled in Medicaid. In Minnesota,
one card is issued to all public program enrollees regardless of their enrollment in Medicaid or
MinnesotaCare.
Although our Medicaid analysis is limited to a small sample of Medicaid respondents in
Minnesota, it demonstrates the significant impact on estimates of Medicaid coverage resulting
from self reports of Medicaid among those in commercial plans who are not known to be
enrolled in Medicaid. This is a unique and unexpected contribution of this study. In a typical
survey, an analyst would total up the number of people who answer affirmatively to having
Medicaid to get the number of people estimated to be on Medicaid. In our study, only 21% of
this total number would be made up of those people who are actually known to be enrolled.
Those who were actually known to be enrolled in MinnesotaCare and Medicaid/PMAP
accounted for 67% and 21% of the survey weighted count of self-reported MinnesotaCare and
Medicaid coverage, respectively. In contrast, those actually known to be enrolled in commercial
insurance coverage products made up 96-99% of the commercial enrollment count from
the survey.
Our findings have puzzling implications for undercount research in that the "undercount",
as conventionally measured, is probably underestimated. In other words, the survey count of
those enrolled in Medicaid is likely to be significantly lower after taking out those who claim to
have Medicaid but are not known to be enrolled, which would widen the gap between survey and
administrative enrollment counts. In our study it only takes a
relatively small percentage of “yes” responses to Medicaid from those people known to be
enrolled in commercial programs to impact the self-reported Medicaid estimates. In order to get
an idea of how our estimate of Medicaid false-positives would impact a typical statewide
population survey counting the number of people enrolled in Medicaid, we would need to adjust
the BCBS population to resemble the distribution of coverage within the adult population of
Minnesota.
The MinnesotaCare coverage in this BCBS sample is 2.7 times larger than is found in the
full adult population in Minnesota, and adult Medicaid/PMAP coverage in the BCBS sample is
only half what it is in the general adult population. Thus, it is misleading to directly use our
BCBS-based estimate of the effect of those who are not known to be enrolled in
Medicaid/PMAP on the Medicaid count from the survey. To obtain a rough estimate of the
impact on a general statewide adult population survey, we use administrative data on adult
enrollment in Medicaid and MinnesotaCare and combine this with the most recent estimates of
the proportion of the adult population (under age 65) with commercial coverage, with no
insurance and, finally, the total Medicare population.10 Even with these statewide population
adjustments, we would still likely find that actual Medicaid/PMAP enrollees made up only 50%
of the total Medicaid count from the survey.
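The flavor of this adjustment can be seen in a back-of-the-envelope calculation that applies the Medicaid-reporting rates from Table 3 to a statewide coverage mix. The statewide counts below are illustrative placeholders, not the RDD estimates referenced in footnote 10.

```python
# Illustrative statewide adult counts by true coverage type (placeholders):
statewide = {"commercial_lt65": 2_500_000, "commercial_65plus": 500_000,
             "minnesotacare": 90_000, "medicaid_pmap": 150_000}

# Share of each group self-reporting Medicaid/PMAP, taken from Table 3:
reports_medicaid = {"commercial_lt65": 0.021, "commercial_65plus": 0.085,
                    "minnesotacare": 0.488, "medicaid_pmap": 0.843}

survey_count = sum(statewide[g] * reports_medicaid[g] for g in statewide)
true_share = (statewide["medicaid_pmap"] * reports_medicaid["medicaid_pmap"]
              / survey_count)
print(f"Actual enrollees as a share of the survey Medicaid count: {true_share:.0%}")
# With these placeholder counts the share comes out just under 50%, in line
# with the rough statewide adjustment described in the text.
```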
Respondents known to have MinnesotaCare coverage make up the largest percentage
contribution to the overall number of self-reported Medicaid responses. Forty-nine percent of
those respondents enrolled in MinnesotaCare report that they have Medicaid. Nine percent of
those known to be enrolled in commercial insurance and age 65 and older responded yes to
having Medicaid. While only 2% of commercial enrollees under age 65 responded that they
have Medicaid, their absolute contribution to the self-reported Medicaid count is significant
because they are the largest group in the survey.
Where does this leave us in understanding the Medicaid “undercount” issue? There are
four likely sources of the error leading to the undercount. The first source, under-reporting of
Medicaid, is one we assessed in this analysis: 15.7% of the Medicaid/PMAP population we
surveyed did not report having Medicaid; of this 15.7%, 96% answered some other type of
coverage. A second source is survey sample coverage error which occurs when Medicaid
recipients are systematically not included in the survey sample for some reason. This can happen
in a phone survey, for example, because Medicaid respondents are more likely to be without a
telephone than the general population (Davern et al. 2004). It could be happening in surveys
with area probability sample designs (such as the CPS-ASEC or the NHIS) if people with
Medicaid are systematically missed by the sample design (i.e., the sampling frame for these
surveys may be more likely to exclude Medicaid enrollees for some reason).11
10 We use the statewide RDD survey conducted in conjunction with the MATS survey for these
coverage estimates (Minnesota Adult Tobacco Survey 2003).
The third possible explanation is that Medicaid participants are more likely to refuse to
participate in surveys than those who are not enrolled in Medicaid, and the post-stratification
adjustments that population surveys use to adjust for differential non-response (e.g., see Gelman
and Carlin 2002) may not fully capture this tendency. The final potential source of undercount
error is some sort of systematic over-reporting of Medicaid-enrolled individuals in the
enrollment files. Examples include counting institutionalized people in the administrative data
who are not eligible to be in household surveys, or not completely de-duplicating enrollment
databases prior to the count. Although all four errors are likely to play a role, more research is
needed to understand what is causing the mismatch between survey counts of Medicaid enrolled
people and administrative data enrollment counts.
CONCLUSION
Our experiment found only a 0.4% upward bias in the uninsurance estimate for people
over 18 years of age enrolled in BCBS of Minnesota due to insured people answering the survey
items as if they are uninsured. Given this finding, the severe criticism that has been leveled
against survey estimates of the uninsured (that use conventional health insurance survey
measurement) by Hunter (2004) and the Joint Economic Committee (2004) is overblown.
11 Most surveys have trouble enumerating low-income populations such as Medicaid recipients. Surveys
that have the resources to conduct thorough sample coverage evaluations, such as the CPS-ASEC, show
that there are sample coverage problems (U.S. Census Bureau 2003). The policy implications of the
coverage problems have also been examined for the decennial census (PriceWaterhouseCoopers 2001).
Medicaid coverage is not explicitly examined but it is correlated with many of the documented coverage
problems and could contribute to the Medicaid undercount.
Survey instruments employing a conventional point-in-time measure of health insurance
coverage and that use a “check all that apply” approach to health insurance coverage
measurement do a good job of gauging whether an insured person has health insurance coverage.
However, the large number of people enrolled in public programs who report having the wrong
type of public coverage raises the issue of whether policy-useful survey estimates of specific
public programs such as SCHIP are possible. Follow-up analysis should be conducted on
whether the federal surveys employing the conventional health insurance survey module
demonstrate the same pattern.
In closing we note that the existence of a Medicaid undercount in surveys does not mean
that there is a large direct bias in survey estimates of the uninsured. The number of people
reporting no health insurance coverage when they are known to have it is rather small in our
study and in other studies examining point-in-time survey instruments (Blumberg and Cynamon
1999; Call et al. 2002). Many more people with Medicaid and MinnesotaCare (a SCHIP like
program in Minnesota) insurance coverage answer that they have a different kind of insurance
coverage than the one they are known to be enrolled in (Klerman et al. 2005; Call et al. 2002).
This is potentially troubling for policy researchers interested in modeling the enrollment or
potential enrollment of specific public programs (such as MinnesotaCare, SCHIP, and Medicaid),
and it is a problem that survey instrument designers should attempt to address. Finally, we add a
cautionary note
that just because we know that some people who have insurance coverage answer the survey as
though they are uninsured, this does not mean that survey estimates of the uninsured are biased.
To know the true impact of misreporting on bias in the uninsurance estimates we would also
need to know how many uninsured people incorrectly answer the survey as though they are
insured.
REFERENCES
AAPOR. 2004. The American Association for Public Opinion Research's Standard Definitions:
Final Dispositions of Case Codes and Outcome Rates for Surveys. 3rd edition. Lenexa, KS: AAPOR.
Blewett, Lynn A., Margaret B. Good, Kathleen T. Call, and Michael Davern. 2004. Monitoring
the uninsured: A state policy perspective. Journal of Health Politics, Policy and Law 29:107-
Blewett, Lynn A., Michael Davern. Forthcoming. “Meeting the Need for State Level Estimates
of Health Insurance Coverage: What Has Been Done and How it Can Be Improved.” Health
Blumberg, Stephen J. and Marcie L. Cynamon. 1999. Misreporting Medicaid enrollment: Results
of three studies linking telephone surveys to state administrative records, in 7th Conference on
Health Survey Research Methods. Williamsburg VA
Call, Kathleen T., Gestur Davidson, Anna S. Sommers, Roger Feldman, Paul Farseth, and Todd
Rockwood. 2002. Uncovering the missing Medicaid cases and assessing their bias for estimates
of the uninsured. Inquiry 38:396-408.
Card, David, Andrew K.G. Hildreth, and Lara D. Shore-Sheppard. 2001. The measurement of
Medicaid coverage in the SIPP: Evidence from California, 1990-1996. National Bureau of
Economic Research, Working Paper 8514. Cambridge, MA: National Bureau of Economic
Research.
Cohen, Joel. 1997. Design and methods of the Medical Expenditure Panel Survey Household
Component. Rockville (MD): Agency for Health Care Policy and Research; 1997. MEPS
Methodology Report No. 1. AHCPR Pub. No. 97-0026.
Congressional Budget Office. 2003. How Many Lack Health Insurance and for How Long?
Washington DC: Congressional Budget Office. Economic and Budget Issue Brief, May 12,
2003. Available at: http://www.cbo.gov/ftpdocs/42xx/doc4211/05-12-03-UninsuredBrief.pdf
Davern, Michael, James Lepkowski, Kathleen T. Call, Noreen Arnold, Tracy L. Johnson, Karen
Goldsteen, April Todd-Malmov, and Lynn A. Blewett. 2004. Telephone service interruption
weighting for state health insurance surveys. Inquiry 41:280–290.
Davern, Michael, Lynn A. Blewett, Boris Bershadsky, Kathleen T. Call, and Todd Rockwood.
2003. “State Variation in SCHIP Allocations: How Much Is There, What Are Its Sources, and
Can It Be Reduced.” Inquiry 40:184-197.
Division of Health Interview Statistics. 2003. National Health Interview Survey (NHIS). Public
Use Data Release. NHIS Survey Description. Hyattsville, MD: National Center for Health
Statistics, Centers for Disease Control and Prevention. U.S. Department of Health and Human
Services. December 2004.
Farley-Short, Pamela. 2001. Counting and characterizing the uninsured. University of Michigan,
Ann Arbor MI: Economic Research Initiative on the Uninsured Working Paper Series.
Available at www.umich.edu/~eriu/pdf/wp2.pdf.
Fronstin, Paul. 2000. Sources of health insurance and characteristics of the uninsured: Analysis
of the March 2000 Current Population Survey. EBRI Issue Brief Number 228. Washington DC:
the Employee Benefit Research Institute.
Gelman, Andrew and John Carlin. 2002. Poststratification and Weighting Adjustments. pp 289-
302 in Survey Nonresponse, eds. Robert Groves, Don Dillman, John Eltinge and Roderick Little.
New York: Wiley.
Giannarelli, Linda, Paul Johnson, Sandi Nelson, and Meghan Williamson. 2005. "TRIM3's 2001
Baseline Simulation of Medicaid and SCHIP Eligibility and Enrollment: Methods and Results."
DHHS, ASPE, TRIM3 Microsimulation Project Technical Paper, April 2005.
Holbrook, Allyson L., Melanie C. Green and Jon A. Krosnick. 2003. Telephone versus face-to-
face interviewing of national probability samples with long questionnaires: Comparisons of
respondent satisficing and social desirability response bias. The Public Opinion Quarterly 67:79-
Hunter, Derek. 2004. Counting the uninsured: Why Congress should look beyond the Census
figures. Webmemo #555, August 26th, 2004. Heritage Foundation. Washington DC.
Available at http://www.heritage.org/Research/HealthCare/wm555.cfm.
Joint Economic Committee. May 13, 2004. The Complex Challenge of the Uninsured.
Washington DC: Joint Economic Committee of the U.S. Congress.
Klerman, Jacob A., Jeanne S. Ringel and Beth Roth. 2005. "Under-reporting of Medicaid and
Welfare in the Current Population Survey." Los Angeles, CA: RAND, document WR-
Lepkowski, James, Howard Schuman, Rui Wang and Richard Curtin. 2005. "Listed Numbers
and Do-Not-Call Registration in a National Telephone Survey.” Paper Presented at the American
Association for Public Opinion Research Annual Meeting, Miami Beach FL, May 2005.
Lewis, Kimball, Marilyn Ellwood, and John L. Czajka. 1998. Counting the Uninsured: A Review
of the Literature. Washington DC: Urban Institute, July 1, 1998.
Minnesota Adult Tobacco Survey. 2003. “Quitting Smoking, 1999-2003: Nicotine Addiction in
Minnesota." St. Paul, MN: Minnesota Department of Health, January 2004.
Nelson, David E., Eve Powell-Griner, Machell Town, and Mary G. Kovar. 2003. A comparison
of national estimates from the National Health Interview Survey and the Behavioral Risk Factor
Surveillance System. American Journal of Public Health 93:1335-41.
PriceWaterhouseCoopers. 2001. Effect of US Census Bureau Undercount on Federal Funding to
States and Selected Counties, 2002-2012. Report Prepared for the US Census Bureau Monitoring
Board, Presidential Members.
StataCorp. 2003. Stata Statistical Software: Release 8.0. College Station, TX: Stata Corporation.
State Health Access Data Assistance Center. 2005. “HRSA State Planning Grant Program
Grantees.” Available upon request to firstname.lastname@example.org.
U.S. Census Bureau. 2001. Survey of Income and Program Participation User’s Guide. Third
Edition. Washington, DC: U.S. Department of Commerce, 2001.
U.S. Census Bureau. 2003. Current Population Survey Technical Paper #63 Revised (TP63RV);
Chapters 15 and 16. Washington DC: U.S. Census Bureau, March 2002.
Table A-1: A Comparison of the Blue Cross and Blue Shield of Minnesota
Sample to the General Statewide Random Digit Dial Component of the 2003
Minnesota Adult Tobacco Survey
[Table values omitted. The table compares the Random Digit Dial^ and BCBS of MN^^ samples on
race (White; American Indian; Other), age group (18 to 24; 25 to 34; 35 to 44; 45 to 54; 55 to 64),
education (0-12 with no diploma through some graduate school), having been to a health
professional in the last year, self-reported health (fair or poor; excellent, very good, or good), and
region (Minneapolis/Saint Paul MSA; other MN MSA).]
* p<0.05, ** p<0.01, *** p<0.001
^ Source: Minnesota Adult Tobacco Survey, Statewide Random Digit Dial Component 2003
^^ Source: Minnesota Adult Tobacco Survey, BCBS Listed Frame Component 2003
Table 1: Blue Cross and Blue Shield of Minnesota Sample Records Used,
Completed Surveys, and Total Population

Stratum                        Records Used   Completed Surveys   Total Population
Commercial (25 and over)              2,388               1,316            590,258
MinnesotaCare (25 and over)           1,953               1,115             32,380
Medicaid/PMAP (25 and over)           2,070               1,214             11,458
Medicare Supplemental                   508                 315            163,306
18-24 year olds                       1,287                 615            100,471
Total                                 8,206               4,575            897,873
Table 2: 2003 Minnesota Adult Tobacco Survey Health Insurance Question Series
I am going to read you a list of different types of insurance. Please tell me if you currently have any of the following.
Do you have Medicare?
Do you have a Railroad Retirement plan?
Do you have CHAMPUS, TRICARE, Veterans Affairs, or military health care for a service-connected
disability?
Do you have Medical Assistance, Medicaid, PMAP (Prepaid Medical Assistance Plan), also known as
Minnesota Health Care Programs?
Do you have General Assistance Medical Care, or GAMC?
Do you have insurance through MinnesotaCare (a state-sponsored program that offers health insurance
at a subsidized rate)?
Do you have insurance through the Minnesota Comprehensive Health Association or high risk pool
insurance (also known as MCHA)?
Do you have health insurance through your work or union?
Do you have health insurance through someone else's work or union?
If under 25, through parent's work or union?
If under 25, through school, college, or university?
Do you have health insurance bought directly by you?
Do you have health insurance bought directly by someone else?
IF RESPONDENT ANSWERED NO TO ALL OF ABOVE,
According to the information you provided, you do not have health insurance coverage. Does anyone else
pay for your bills?
IF RESPONDENT ANSWERED YES, And who is that? (PROMPT ONLY IF NECESSARY)
Repeat options from above series
Table 3: Type of Coverage Known to be Enrolled in, by Self-Reported Insurance Coverage Type

                          Self-Reported Coverage Type, % (SE)
Type of Known Coverage    Commercial      Medicare        MinnesotaCare   Medicaid/PMAP   Uninsured
Commercial (<65 years)    99.4% (0.19%)    4.5% (0.53%)    1.8% (0.33%)    2.1% (0.36%)   0.3% (0.14%)
Commercial (≥65 years)    90.7% (1.42%)   98.3% (0.60%)    1.4% (0.56%)    8.5% (1.35%)   0.5% (0.34%)
MinnesotaCare             49.9% (1.69%)    6.8% (1.04%)   87.9% (1.27%)   48.8% (1.69%)   0.3% (0.21%)
Medicaid/PMAP             32.1% (2.43%)   11.6% (1.83%)   28.3% (2.27%)   84.3% (1.88%)   0.6% (0.20%)

Source: 2003 Minnesota Adult Tobacco Survey, n=4,314 respondents
Note: Respondents can report more than one type of coverage, so row percentages total more than 100%.
Table 4: Respondents Who Report Known Type or Type They are
Likely to Have Only, by Type of Coverage Known to be Enrolled In

Type of Known Coverage                 Known or Likely Enrolled Type Only, % (SE)
Commercial (<65 years)                 87.4% (0.87%)
Commercial (≥65 years),
  Medicare and Commercial              71.7% (2.19%)
MinnesotaCare                          24.5% (1.35%)
Medicaid/PMAP                          34.5% (2.29%)

Source: 2003 Minnesota Adult Tobacco Survey, n=4,314
Note: Commercial ≥65 are enrolled in commercial coverage and are likely to have Medicare
but are not known to be enrolled.
Table 5: Total Count of Known Enrollees, Total Count from Self-Report, and Percent of
Self-Report Count that are Known Enrollees by Insurance Coverage Type

Type of Coverage             Known Enrolled   Self-Reported   Known Enrolled as %
                             (weighted)       (weighted)      of Self-Report (SE)
Commercial (<65 years)       602,792          623,961         96.0% (0.20%)
Commercial (≥65 years)
  Medicare                   202,381          199,121         99.9% (0.05%)
  Commercial                 202,381          183,874         99.8% (0.06%)
  Medicare and Commercial    202,381          181,765         99.9% (0.05%)
MinnesotaCare                 40,380           53,071         66.8% (3.08%)
Medicaid/PMAP                 15,316           62,290         20.7% (1.51%)

Source: 2003 Minnesota Adult Tobacco Survey, n=4,314
Note: Commercial ≥65 are enrolled in commercial coverage and are likely to have Medicare but are not known to be
enrolled.