Validating Health Insurance Coverage Survey Estimates:
A Comparison Between Self-Reported Coverage and Administrative Data Records
Michael Davern, Ph.D.
Assistant Professor, University of Minnesota
Kathleen Thiede Call, Ph.D.
Associate Professor, University of Minnesota
University of Minnesota
Gestur Davidson, Ph.D.
Senior Research Associate, University of Minnesota
Timothy J. Beebe, Ph.D.
Associate Professor, Mayo Clinic College of Medicine
Lynn Blewett, Ph.D.
Associate Professor, University of Minnesota
Michael Davern, Ph.D., Assistant Professor, State Health Access Data Assistance Center,
University of Minnesota School of Public Health, 2221 University Avenue SE #345,
Minneapolis, MN 55414; phone 612-625-4835; fax 612-624-1493; e-mail email@example.com.
The authors are especially grateful to Pete Rode, Steven Foldes, Barb Schillo, Nina Alesci, Jessie
Saul and Ann Kenny for help in compiling the data. Also we would like to thank Cathi Callahan
of ARC and Linda Giannarelli of the Urban Institute for their comments on this paper. An
earlier version of this paper was presented by Kathleen Thiede Call at the American Association
for Public Opinion Research Annual Meeting in Phoenix, AZ, in May 2004, and by Michael
Davern at the American Enterprise Institute on April 8, 2005. Preparation of
this manuscript was funded by Grant no. 038846 from The Robert Wood Johnson Foundation.
Validating Health Insurance Coverage Survey Estimates:
A Comparison Between Self-Reported Coverage and Administrative Data Records
We administered a health insurance coverage survey module to a sample of 4,575 adult
Blue Cross and Blue Shield of Minnesota (BCBS) members to examine if people who have
health insurance coverage self-report that they are uninsured. We were also interested in
whether respondents correctly classify themselves as having commercial, Medicare,
MinnesotaCare, and/or Medicaid coverage (the four sample strata). The BCBS of Minnesota
sample is drawn from both public and commercial health insurance coverage strata that are
important to policy research involving survey data. Our findings support the validity of our
health insurance module for determining whether someone who has health insurance is correctly
coded as having health insurance coverage: just 0.4% of the BCBS members answered the
survey as though they were uninsured. However, we find problems for researchers interested in
using self-reports of specific types of coverage. For example, 49% of the people on MinnesotaCare
reported having Medicaid/PMAP coverage and 50% reported having commercial coverage. We
conclude with a discussion of the study’s implications for understanding the Medicaid
“undercount” and suggestions for altering the design of surveys of health insurance coverage in
order to improve the validity of the types of self-reported coverage.
Knowing who lacks insurance coverage is essential for health services research and
health policy analysis. The only way to enumerate this population—as there is no list of
uninsured people throughout the country—is through the use of a general population survey.
Many government-sponsored general population surveys—such as the Current Population
Survey’s Annual Social and Economic Supplement (CPS-ASEC), the National Health Interview
Survey (NHIS), the Medical Expenditure Panel Survey-Household Component (MEPS-HC), the
Survey of Income and Program Participation (SIPP) and state-specific surveys—currently collect
data on whether respondents have health insurance coverage (see Blewett et al. 2004 for a
review). Estimates from these surveys form the core of our comprehensive knowledge about the
number and characteristics of people lacking health insurance coverage. These surveys are
widely used to simulate various policy options, distribute funding to states for public health
insurance programs, and to evaluate whether specific policies have been successful in achieving
stated goals (Blewett et al. 2004; Davern et al. 2003).
Because surveys are central to our understanding of health insurance coverage, validation
of these measures is essential. In this paper we first examine whether people with known health
insurance coverage tend to correctly classify themselves as being insured. Specifically, we look
at a stratified sample of Minnesota adults age 18 and over enrolled in both public and commercial
health insurance products within one specific plan. Surveys generally follow a “conventional”
measurement approach in which an exhaustive list of types of health insurance coverage is read
and respondents can say yes or no to having that type of coverage. At the end of the series a
verification question is asked to determine whether a person who said “no” to all the different
insurance types actually considers themselves uninsured.
In addition to examining whether people known to have health insurance coverage
accurately self-report having coverage, we also explore whether people are able to accurately
self-report the type of health insurance coverage they have. Health insurance coverage in the
United States comprises a complex array of government-sponsored health insurance
programs and commercially purchased health insurance products, which may lead to confusion
about the specific type of insurance a person has. Furthermore, people can—and many do—have
multiple types of coverage, both public and commercial. To measure this complex concept,
conventional survey questionnaires usually contain items asking whether respondents have any
of the many types of coverage, allowing respondents to answer affirmatively to more than one type.
There are two major challenges to the perceived validity of conventional surveys of
health insurance coverage. First, there are multiple surveys for the same geographic area that can
and often do produce different estimates of health insurance coverage (Lewis, Ellwood and
Czajka 1998). Second, population surveys are thought to “undercount” the number of people
enrolled in public health programs according to enrollment data (Blumberg and Cynamon 1999;
Call et al. 2002; Lewis, Ellwood and Czajka 1998).
Many surveys collect detailed information on health insurance coverage and much of
their data are in the public domain and easily accessible (Blewett et al. 2004). Ironically, it is the
very wealth of survey data in this area that has served to undermine their perceived validity. The
many surveys that measure health insurance coverage produce different estimates of the rates of
uninsurance. Despite many attempts to explain why survey estimates differ (Nelson et al. 2003;
Congressional Budget Office 2003; Fronstin 2000; Lewis, Ellwood and Czajka 1998; Farley-Short
2001), this issue has not been settled. There are many potential reasons why survey
estimates can vary, but a rigorous accounting of their relative importance has not yet been undertaken.
In addition to conflicting estimates across surveys, counts of program
participation produced by surveys differ consistently from administrative data.
Specifically, surveys usually undercount the number of people enrolled in public programs (e.g.,
Medicaid, food stamps, welfare) when compared to program enrollment data (Blumberg and
Cynamon 1999; Call et al. 2002; Lewis, Ellwood and Czajka 1998). Although the undercounting
of public program participation in surveys of health insurance coverage is not important for
determining the number of individuals enrolled in Medicaid (enrollment data should be used for
this purpose), surveys provide the only estimate of those lacking insurance and the extent to
which programs are reaching their target populations. If, as it is often assumed, a significant
number of survey respondents with Medicaid coverage report that they do not have coverage,
then the survey may overestimate the rate of those who are uninsured and eligible for a program.
On the other hand, if survey respondents who are Medicaid enrollees report that they have other
types of public (e.g., Medicare) or commercial health insurance, then estimates of these coverage
types will be higher than they should be, but the overall uninsured estimate would be unaffected.
The undercount and confusion over the various survey estimates of the uninsured have
led to severe criticism of the major survey estimates of health insurance coverage. A particularly
pointed example was included in a research report released by the Heritage Foundation in August
2004 to coincide with the Census Bureau’s annual release of the CPS-ASEC estimates of the
number of uninsured:
“At the very least, the undercounting of Medicaid recipients and the undercounting of
insurance coverage…demonstrate that the Census Bureau’s figures on the uninsured do
not accurately reflect reality and may lead policymakers and the public to incorrect
impressions about the uninsured. Policymakers and policy experts have no excuse for not
owning up to this fact and should supply it as a major caveat whenever making use of the
Census data on the uninsured.” (Hunter 2004, p. 3).
Another example of this sentiment, extended to health insurance surveys in general, comes
from a recent U.S. Congress Joint Economic Committee report on the uninsured:
“Methodologies for estimating the number of uninsured suffer from several shortcomings
that may lead them to overestimate the number of uninsured. Many respondents are
unsure of or forget their insurance status, which makes surveys tend to overestimate the
ranks of the uninsured. Those eligible for Medicaid, in particular, may report themselves
as uninsured… Indeed, fewer people indicate in surveys that they have Medicaid than are
accounted for by the Medicaid program.” (Joint Economic Committee 2004, p. 2)
We use a unique data set of adults enrolled in Blue Cross and Blue Shield of Minnesota
(BCBS) that allows us to answer three related research questions that speak to this broader
question of the merit of conventional survey estimates of health insurance coverage. First, at
what rate do individuals who actually have specific types of commercial and public health
insurance coverage report that they are uninsured in surveys? Second, at what rate do individuals
with specific types of coverage respond that they are insured, but report having coverage they are
not known to be enrolled in (e.g., someone known to be enrolled in PMAP/Medicaid reporting
that they have insurance coverage through an employer)?1 Although past research has examined
the accuracy of reporting among public program enrollees in a similar fashion (Klerman 2005;
Eberly 2005; Call et al. 2002; Card, Hildreth and Shore-Sheppard 2001; Blumberg and Cynamon
1999), no published report has systematically examined both public and commercial enrollment
as we are able to with this data set. Finally, given this information, how can conventional
health insurance survey instruments be improved in the future?
DATA AND METHODS
The source of our data is the 2003 Minnesota Adult Tobacco Survey (MATS). This
cross-sectional survey was designed to estimate smoking prevalence rates and tobacco-related
behaviors and beliefs of BCBS health plan members 18 years of age and older. As part of this
analysis we included a health insurance module on the MATS survey that allows us to validate
self-reported health insurance coverage from the survey against BCBS’s administrative records. This
module forms the core of the Coordinated State Coverage Survey (CSCS) that has been fielded
in at least 12 states over the past ten years (State Health Access Data Assistance Center 2005).
The MATS survey drew a stratified random sample from four major strata of BCBS
members: 1) people 18-64 years of age with commercial health insurance coverage (e.g.,
employer-sponsored and privately-purchased); 2) people 65 years of age and older with
commercial insurance coverage (mainly those people with privately purchased Medicare
supplemental coverage, but also including seniors with employer-sponsored coverage); 3)
MinnesotaCare enrollees (MinnesotaCare is a state-sponsored health insurance program for
low-income adults and children who are not eligible for Medicaid);2 and 4) Prepaid Medical
Assistance Program (PMAP) enrollees (PMAP is prepaid Medicaid coverage provided by a
managed care organization).3 Three of these strata were further broken down into 18-24 year
olds versus other adults, resulting in a total of seven sampling strata.4 Members of BCBS were
excluded if they lived outside the state of Minnesota. Institutionalized members of BCBS and
“dual eligible” Medicaid/Medicare enrollees were also excluded from the MATS sample. Table 1
lists the sample size and total population for each stratum.

1 We base known enrollment status on the BCBS administrative data. It is possible that there are
errors in recording who has coverage and what type of coverage a person has.
--- INSERT TABLE 1 ABOUT HERE ---
The total number of adults enrolled in BCBS health insurance products and eligible to be
sampled for the survey was 897,866, or roughly 24% of the adult population in the state of
Minnesota. Survey weights were created for the respondents selected in the stratified random
sample so that the sample represents the entire BCBS population in the state. Respondents were
weighted relative to their probability of selection into the sample. The person-weight is equal to
the inverse probability of selection. This weight is adjusted through post-stratification to match
known population distributions of a given group. The post-stratifying variables are: 1) gender;
2) the total number of adults in each of the sampling strata; and 3) whether the person lived in
the Minneapolis/Saint Paul metropolitan area, another Minnesota metropolitan area, or a
Minnesota non-metro area. All reported estimates are derived from the weighted sample.

2 Over 80 percent of adult MinnesotaCare enrollees are enrolled in BCBS.

3 Medicaid Prepaid Medical Assistance Plan (PMAP) enrollees are not a random subset of adult
Medicaid enrollees in Minnesota. In most cases, developmentally and physically disabled
Medicaid-eligible persons are not required to enroll in Medicaid managed care plans – they are
allowed to remain in the fee-for-service sector. Those enrolled in PMAP are allowed to choose
from a number of insurance carriers, of which BCBS is just one in the state. As of April 2003,
Medicaid in Minnesota had an enrollment of 446,375, of whom 257,605 (58%) were in PMAP.
Only 17,463 of these cases are enrolled in BCBS. There is also a long list of the types of MA
recipients who are not required to enroll in PMAP; people who are blind or disabled and people
in counties not participating in PMAP are the largest of these groups. For these reasons we
interpret our PMAP findings with caution.

4 The survey was designed to obtain detailed information on the smoking habits of 18-24 year
olds, making an oversample of this population appropriate.
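The weighting procedure described above (an inverse-probability base weight followed by a post-stratification ratio adjustment) can be sketched as follows; the strata, cells, and counts below are hypothetical, not the actual MATS figures:

```python
# Base weight: inverse probability of selection within each sampling stratum.
# Post-stratification: scale weights within each cell (e.g., gender x region)
# so weighted totals match known population counts. All values hypothetical.

# stratum -> (population size, sample size)
strata = {"commercial<65": (700_000, 1_200), "pmap": (17_000, 600)}
base_weight = {s: pop / n for s, (pop, n) in strata.items()}

# respondents: (sampling stratum, post-stratification cell)
respondents = [("commercial<65", "F/metro"),
               ("commercial<65", "M/metro"),
               ("pmap", "F/nonmetro")]

# known population totals for each post-stratification cell
cell_targets = {"F/metro": 400_000, "M/metro": 300_000, "F/nonmetro": 9_000}

# sum of base weights falling in each cell
cell_weight = {}
for stratum, cell in respondents:
    cell_weight[cell] = cell_weight.get(cell, 0.0) + base_weight[stratum]

# ratio adjustment: weighted totals now reproduce the cell targets exactly
final_weights = [base_weight[s] * cell_targets[c] / cell_weight[c]
                 for s, c in respondents]
```

After the adjustment, the weights in each cell sum to that cell's known population total, which is what "post-stratified to match known population distributions" means operationally.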
The survey was administered by Clearwater Research, Inc.; interviewers used Computer
Assisted Telephone Interviewing (CATI) software, and no proxy responses were allowed. Only
BCBS enrollees with listed telephone numbers were included in the study.5 BCBS does not keep
careful track of current phone numbers, and the MATS survey team was able to find listed
numbers for 65% of the BCBS records (using commercial marketing databases
and the National Change of Address file). Interviews were conducted between November 2002
and June 2003; if a sampled BCBS member was no longer a resident of a household, follow-up
contact information was requested.
Of the 4,575 completed cases, 235 were later found to no longer be enrolled in BCBS on
the day of the interview. These observations were excluded from the analysis because we could
not validate their self-reported insurance status. In addition, 18 cases were removed from the
analysis because the respondent did not affirmatively answer “yes” to any type of health
coverage but answered “don’t know/not sure” to one or more types of coverage; and eight cases
were removed because they were under 65 and enrolled in senior supplemental insurance. Our
final analysis sample size was therefore 4,314, representing 860,870 BCBS members.6

5 Lepkowski et al. (2005) found very little bias between listed and unlisted telephone numbers in a
large national random digit dial telephone survey in the variables they studied, although they were
not specifically interested in health insurance.
The overall response rate of the survey, calculated using the American Association for Public
Opinion Research’s (AAPOR) Response Rate 4 (RR4), was 61.5%, and the respondent weights
were post-stratified to equal total enrollees within each of the BCBS strata by region of the state,
age, and gender.7 All analyses were done using StataSE 8.0 software to correct for the complex
survey design (StataCorp 2003).
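AAPOR’s RR4, the rate used above, counts partial interviews as respondents and includes only an estimated eligible fraction e of the cases whose eligibility is unknown. A minimal sketch of the formula follows; the disposition counts in the example are hypothetical, not the MATS dispositions:

```python
def aapor_rr4(i, p, r, nc, o, uh, uo, e):
    """AAPOR Response Rate 4.

    i: complete interviews; p: partial interviews; r: refusals and
    break-offs; nc: non-contacts; o: other eligible non-respondents;
    uh, uo: unknown-eligibility cases (household / other); e: estimated
    proportion of unknown-eligibility cases that are actually eligible.
    """
    return (i + p) / ((i + p) + (r + nc + o) + e * (uh + uo))

# hypothetical disposition counts
rate = aapor_rr4(i=1000, p=50, r=300, nc=200, o=20, uh=150, uo=50, e=0.8)
```

With these hypothetical counts the rate is 1050 / (1050 + 520 + 160), roughly 0.61, in the neighborhood of the 61.5% reported above.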
We have two sources of insurance status information for each respondent. The first is the
BCBS health plan administrative data regarding insurance type. For analytical purposes we
break the sampling strata into four analytically useful types of coverage: commercial coverage
for those under 64 years of age; commercial coverage for those 65 years of age and older
(including all the senior supplemental enrollees plus those enrollees over 65 with employer-
sponsored coverage); MinnesotaCare coverage; and Medicaid/PMAP coverage.8
The second source of coverage data is the respondents’ self-reported insurance status
from the survey questions. The survey questions begin, “I am going to read you a list of
different types of insurance….” As with many health insurance surveys, the interviewer
continues by reading an exhaustive list of different types of insurance (e.g., Medicare, Railroad
Retirement Plan, Medicaid/PMAP, employer-sponsored insurance). The respondent answers
“yes,” “no,” or “don’t know/not sure” to each type of insurance (see Table 2 for the exact
question wording). After the list is read through completely, if the person does not report having
coverage, an uninsurance verification item is asked. Answering “yes” to more than one type of
insurance is allowed. If the respondent answered “yes” to having at least one type of health
insurance coverage, then any “don’t know/not sure” and refused responses in the series were
treated as “no.”

6 Appendix A contains a table comparing the demographics of the BCBS sample of adults we use in
this analysis to the statewide Random Digit Dial (RDD) survey of adults that was conducted in
parallel as part of the entire 2003 Minnesota Adult Tobacco Survey. The RDD sample size was 5,525
and the AAPOR response rate (RR4) for the RDD component was 51.9%. Compared to the RDD
sample, the BCBS sample is more likely to be white, older, living outside the Minneapolis/Saint Paul
MSA, slightly less educated, of slightly lower income, insured, and non-smoking. For more
information about the RDD survey see Minnesota Adult Tobacco Survey (2003).

7 Using AAPOR response rate (RR4), the response rate was 61% among the commercially insured,
66% among MinnesotaCare enrollees, 58% among Medicaid enrollees, and 74% among senior
supplemental enrollees (AAPOR 2004).

8 BCBS data managers report it is highly unlikely that a sample person’s coverage type would be
inaccurately classified given differences in revenue by sector; however, we have not conducted
an independent evaluation of the classification system.
--- INSERT TABLE 2 ABOUT HERE ---
RESULTS

Table 3 provides the distribution of self-reported coverage types by the known type of
BCBS health insurance plan a person was enrolled in. Only 0.3% of the commercially insured
(<65), 0.3% of those on MinnesotaCare, 0.5% of the commercially insured (≥ 65), and 0.6% of
those on PMAP/Medicaid self-reported being uninsured. Thus, relying on these self-reports
exclusively, over the entire sample we would count 0.4% of this known insured population as
being uninsured. There are no statistically significant differences across enrollment types in the
likelihood of self-reporting no insurance coverage.
--- INSERT TABLE 3 ABOUT HERE ---
Further, focusing on the types of coverage self-reported, only a small percentage of
individuals under age 65 known to have commercial coverage report having public coverage
(Medicare (4.5%), MinnesotaCare (1.8%), or Medicaid/PMAP (2.1%)). However, their relative
impact on the number of people estimated to be in public coverage using the survey responses is
quite large because the commercial enrollees under age 65 are the largest population of people
included in the survey. In contrast, 49.9% of individuals known to have MinnesotaCare self-
report having commercial coverage, 6.8% report Medicare, and 48.8% self-report having
Medicaid. In spite of the large percent of those on MinnesotaCare reporting other types of
coverage, the relative impact on the number estimated to have those types of coverage is minimal
except for the number estimated to have Medicaid.
Fully 99.4% of respondents under age 65 known to be enrolled in commercial health
insurance reported being enrolled in commercial health insurance coverage, compared to 87.9%
of those enrolled in MinnesotaCare and 84.3% of those enrolled in PMAP/Medicaid. Among
those people age 65 or older and known to have commercial coverage, 90.7% report having
commercial coverage and 98.3% report having Medicare (which they likely do have in most cases).
Table 4 shows the percent of people who report the type of coverage they are known to be
enrolled in and/or the type of coverage they are likely to be enrolled in (i.e., the
BCBS commercial insurance enrollees age 65 and older who are presumed to be enrolled in
Medicare). For respondents under age 65 with commercial insurance, 87.4% of these survey
respondents report exclusively having commercial insurance. Among those age 65 and older
with commercial insurance, 71.7% report exclusively having both Medicare and commercial
insurance. Finally, for Medicaid/PMAP enrollees, 34.5% report exclusively having
Medicaid/PMAP coverage, and 24.5% of MinnesotaCare enrollees report exclusively having
MinnesotaCare coverage.
---INSERT TABLE 4 ABOUT HERE ---
Column 2 of Table 5 provides the weighted count of those people known to be enrolled in
commercial insurance under age 65, commercial insurance age 65 and older (who are likely to be
enrolled in Medicare), MinnesotaCare, and Medicaid/PMAP. Column 3 provides the weighted
count of respondents who self-reported having that type of coverage regardless of their known
coverage type. That is, this column provides the count of coverage types that would be
generated from a general population survey estimate. The fourth column gives the percent of
respondents self-reporting that type of insurance who are known to be enrolled. As seen, 96.0%
of the total self-reported count of commercial BCBS enrollees under 65 is made up of people
known to be enrolled. For those commercial enrollees 65 years of age and older, 99.9% of the
total count of those self-reporting Medicare and commercial coverage is made up of those known
to be enrolled.
On the other hand, only 20.7% of those self-reporting Medicaid/PMAP were known to be
enrolled and 66.8% of those self-reporting MinnesotaCare were known to be enrolled. In both
cases clearly a considerable percentage of the self-reported count consists of people who are not
known to be enrolled in Medicaid/PMAP or MinnesotaCare, respectively.
--- INSERT TABLE 5 ABOUT HERE ---
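The column 4 quantity in Table 5, the share of each weighted self-reported coverage count contributed by known enrollees, amounts to a weighted cross-tabulation of administrative coverage against self-reports. A minimal sketch (the records and weights below are hypothetical):

```python
# Each record: (survey weight, coverage type from administrative data,
# set of self-reported coverage types). All values are hypothetical.
records = [
    (150.0, "medicaid", {"medicaid"}),
    (150.0, "commercial", {"medicaid", "commercial"}),
    (120.0, "medicaid", {"commercial"}),
    (100.0, "minnesotacare", {"medicaid"}),
]

def known_share(records, coverage):
    """Weighted share of self-reports of `coverage` that come from people
    known (administratively) to be enrolled in that coverage type."""
    reported = sum(w for w, known, srs in records if coverage in srs)
    confirmed = sum(w for w, known, srs in records
                    if coverage in srs and known == coverage)
    return confirmed / reported if reported else float("nan")
```

Here known_share(records, "medicaid") is 150 / 400 = 0.375: most of the hypothetical self-reported Medicaid count comes from people not known to be enrolled, mirroring the 20.7% figure discussed above.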
DISCUSSION

Our validation study of Minnesota adult enrollees in BCBS indicates that the
estimates of uninsurance are minimally biased upward as a result of counting people who have
health insurance coverage as being uninsured. Specifically, only 0.4% of our adult respondents
with coverage self-reported they are uninsured. In addition, we find that respondents do well
reporting their “correct” insurance coverage type, although the degree of accuracy varies by the
type of insurance coverage a person is enrolled in. The vast majority of respondents answered
affirmatively to having the type of coverage they were known to be enrolled in, but the percent
reporting that “correct” coverage ranged from 99% of commercial enrollees under
65 to 84% of Medicaid/PMAP enrollees. Our analysis also exposes a large—and unexpected—
impact of respondents who are not known to be enrolled in Medicaid/PMAP and MinnesotaCare
on the self-reported counts of the number of people enrolled in these two programs. This
finding has implications for the “Medicaid undercount” literature and how to think about
different survey estimates and Medicaid enrollment. Finally, our results suggest the need for a
discussion of the design of health insurance survey items. The conventional health insurance
module used in this survey appears effective at measuring uninsurance and enrollment in
commercial coverage. But there are clearly concerns for measuring enrollment in Medicaid or
MinnesotaCare, an SCHIP-like state program, and we discuss our results below within the
context of relevant literature and debates in the field.
Effect on Estimates of the Uninsured
We begin by comparing our findings to the results of other experimental studies that have
examined bias in uninsurance estimates due to people with coverage incorrectly reporting they
are uninsured (Klerman 2005; Eberly 2005; Blumberg and Cynamon 1999; Call et al. 2002).
Blumberg and Cynamon9 found that 4.5% of the parent proxies for a sample of Minnesota
children on Medicaid falsely report no insurance coverage. The Call et al. (2002) study, which
analyzed Minnesota Medicaid enrollees of all ages, found that 4.1% of Medicaid enrollees
reported not having any insurance coverage. The Klerman et al. (2005) study found 21.7% of
those cases they were able to match between the CPS-ASEC and Medi-Cal enrollment data
incorrectly answered the survey as though they were uninsured. Finally, a study by Eberly
et al. (2005) in Maryland found that 4.5% of the Medicaid enrollees they contacted over the
telephone answered the survey as though they were uninsured.
This large range from the seemingly negligible 0.4% in the current study to the high
21.7% in the study by Klerman et al. (2005) may be due to different methodological approaches.
There are good reasons to believe the Klerman et al. (2005) result to be an outlier, with 21.7% of
adults 15-64 enrolled in Medi-Cal answering the CPS-ASEC survey as though they were
uninsured. Specifically, the CPS-ASEC produces the highest rate of all-year uninsured people,
and in fact the CPS-ASEC all-year uninsured estimate is higher than some other national survey
estimates of the number of people uninsured at a specific point-in-time (Congressional Budget
Office 2003). The CPS-ASEC asks whether someone had Medicaid (or other types of insurance
coverage) at any time during the last calendar year, with the interviews conducted in March. As
shown by the Klerman et al. (2005) analysis, there is a significant impact of the length of the
reference period for the survey question on the answers provided in the CPS-ASEC. Those
enrolled in Medicaid only in the first few months of the reference period and not again during the
rest of the year were the least likely to report having Medicaid. Thus the CPS-ASEC high
9 The results of three studies are reported by Blumberg and Cynamon. Only the results from the first
study conducted in Minnesota are included here. The uninsurance estimates from the second and third
studies are omitted from this comparison as they are subject to considerable uncertainty as the authors