Article

The Effect of Differential Incentives on Attrition Bias: Evidence from the PASS Wave 3 Incentive Experiment


Abstract

Respondent incentives are widely used to increase response rates, but their effect on nonresponse bias has received far less attention. To contribute to this line of research, we analyze an incentive experiment embedded in the third wave of the German household panel survey "Panel Labour Market and Social Security" (PASS), conducted by the German Institute for Employment Research (IAB). Our question is whether attrition bias differs between two incentive schemes. In particular, we examine whether an unconditional €10 cash incentive yields less attrition bias in self-reported labor income and other sociodemographic variables than a conditional lottery-ticket incentive. We find that unconditional cash incentives are more effective than conditional lottery tickets at reducing attrition bias in income and several sociodemographic variables.
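To make the design of the comparison concrete, the following is a minimal sketch (ours, not the authors' code) of how attrition bias in mean income could be contrasted across the two incentive arms; the data frame and the column names `arm`, `income_w2`, and `responded_w3` are hypothetical.

```python
import pandas as pd

# Hypothetical wave-2 sample with wave-3 response status.
# Columns (illustrative, not from the paper):
#   arm          - "cash" (unconditional €10) or "lottery" (conditional ticket)
#   income_w2    - labor income reported in wave 2
#   responded_w3 - True if the person also responded in wave 3
df = pd.DataFrame({
    "arm": ["cash", "cash", "cash", "lottery", "lottery", "lottery"],
    "income_w2": [1200.0, 2400.0, 800.0, 1500.0, 3100.0, 900.0],
    "responded_w3": [True, True, False, True, False, False],
})

for arm, grp in df.groupby("arm"):
    full_mean = grp["income_w2"].mean()  # benchmark: all wave-2 cases in the arm
    resp_mean = grp.loc[grp["responded_w3"], "income_w2"].mean()
    rel_bias = 100 * (resp_mean - full_mean) / full_mean
    print(f"{arm}: relative attrition bias in income = {rel_bias:+.1f}%")
```

The benchmark is the full wave-2 sample within each arm, so a smaller gap between the respondent mean and the full-sample mean indicates less attrition bias.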


... For example, Mauz et al. (2018) did not offer incentives or test different incentive strategies, despite strong evidence that monetary rewards substantially increase survey participation (e.g., Church, 1993; Felderer et al., 2018; Pforr et al., 2015; Singer & Ye, 2013). ...
Article
Full-text available
Implementing innovations in surveys often results in uncertainty concerning how different design decisions will affect key performance indicators such as response rates, nonresponse bias, or survey costs. Responsive survey designs have therefore been developed to cope better with such situations. In the present study, we propose a responsive survey design that relies on experimentation in the earlier phases of the survey to decide between design choices whose impact on performance indicators is, prior to data collection, uncertain. We applied this design to the European Values Study 2017/2018 in Germany, which moved its general social survey-type design away from the traditional face-to-face mode to self-administered modes. These design changes created uncertainty as to how different incentive strategies and mode-choice sequences would affect response rates, nonresponse bias, and survey costs. We illustrate the application and operation of the proposed responsive survey design, as well as an efficiency issue that accompanies it. We also compare the performance of the responsive survey design to a traditional survey design that would have kept all design characteristics static during the field period.
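As a toy illustration of the decision logic described above (not the authors' implementation; the arm names, rates, and costs below are invented), a responsive design of this kind runs the candidate protocols on an early-phase subsample and commits the remaining budget to the best performer:

```python
# Toy phase-1 decision rule for a responsive survey design (illustrative only).
phase1 = {
    # arm: (phase-1 response rate, cost per completed interview in €)
    "prepaid_incentive_mail_first": (0.32, 18.0),
    "promised_incentive_web_first": (0.25, 11.0),
}

# Commit the main phase to the arm with the most completes per euro,
# subject to a minimum acceptable response rate.
MIN_RR = 0.20
eligible = {a: rr / cost for a, (rr, cost) in phase1.items() if rr >= MIN_RR}
best = max(eligible, key=eligible.get)
print("main-phase protocol:", best)
```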
... Our secondary goal was to examine the effects of different incentives on non-response bias. Incentives have consistently been shown to increase survey response rates (e.g., [20]), but the impact on non-response bias is less clear (e.g., [21][22][23][24]). In prior work with male disability applicants, we showed that larger incentives tended to attract younger, healthier, working men compared to smaller incentives [15]. ...
Article
Full-text available
Background
Non-random non-response bias in surveys requires time-consuming, complicated post-survey analyses. Our goal was to see if modifying cover letter information would prevent non-random non-response bias altogether. Our secondary goal tested whether larger incentives would reduce non-response bias.

Methods
A mailed survey of 480 male and 480 female nationally representative Operations Enduring Freedom, Iraqi Freedom, or New Dawn (OEF/OIF/OND) Veterans applying for Department of Veterans Affairs (VA) disability benefits for posttraumatic stress disorder (PTSD). Cover letters conveyed different information about the survey's topics (combat, unwanted sexual attention, or lifetime and military experiences), how Veterans' names had been selected (a list of OEF/OIF/OND Veterans or a list of Veterans applying for disability benefits), and what incentive Veterans would receive ($20 or $40). The main outcome, non-response bias, measured differences between survey respondents' and the sampling frame's characteristics on 8 administrative variables, including Veterans' receipt of VA disability benefits and exposure to combat or military sexual trauma. Analysis was intention to treat. We used factorial block-design logistic mixed models (ANOVA) to assess bias, and multiple imputation and expectation-maximization algorithms to assess potential missingness mechanisms (missing completely at random, missing at random, or missing not at random) for two self-reported variables: combat and military sexual assault.

Results
Regardless of intervention, men with any VA disability benefits, women with PTSD disability benefits, and women with combat exposure were over-represented among respondents. Interventions explained 0.0 to 31.2% of men's variance and 0.6 to 30.5% of women's variance in combat non-response bias, and 10.2 to 43.0% of men's variance and 0.4 to 31.9% of women's variance in military sexual trauma non-response bias. Under non-random missingness assumptions, men's self-reported combat exposure was overestimated by 19.0 to 28.8 percentage points and their self-reported military sexual assault exposure was underestimated by 14.2 to 28.4 percentage points compared to random missingness assumptions. Women's self-reported combat exposure was overestimated by 8.6 to 10.6 percentage points and their military sexual assault exposure by 1.2 to 6.9 percentage points.

Conclusions
Our interventions reduced bias in some characteristics, left others unaffected, and exacerbated still others. Regardless of topic, researchers are urged to present estimates under all three missingness assumptions.
... PBIAS, also sometimes called the relative bias [7], represents the percent change in error between respondents and the sampling frame: ...
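The expression itself lies outside the excerpt; a standard formulation of relative (percent) bias, assuming that $\bar{y}_r$ denotes the respondent mean and $\bar{y}_s$ the sampling-frame mean, is:

```latex
\mathrm{PBIAS} = 100 \times \frac{\bar{y}_r - \bar{y}_s}{\bar{y}_s}
```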
Preprint
Full-text available
This paper describes a simple back-of-the-envelope approach (with adaptable Excel worksheets) to identifying likely response/nonresponse mechanisms that will lead to situations at high risk of non-ignorable bias.
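A minimal sketch of what such a back-of-the-envelope check can look like (our illustration; the subgroup labels, shares, propensities, and means are invented): assume response propensities for a few subgroups and compare the implied respondent mean with the population mean.

```python
# Back-of-the-envelope nonresponse-bias check (illustrative numbers only).
# Each subgroup: (population share, assumed response propensity, mean outcome).
subgroups = {
    "low_income":  (0.30, 0.40, 1000.0),
    "mid_income":  (0.50, 0.55, 2000.0),
    "high_income": (0.20, 0.70, 4000.0),
}

pop_mean = sum(w * y for w, _, y in subgroups.values())
resp_mass = sum(w * p for w, p, _ in subgroups.values())
resp_mean = sum(w * p * y for w, p, y in subgroups.values()) / resp_mass
print(f"population mean {pop_mean:.0f}, respondent mean {resp_mean:.0f}, "
      f"bias {resp_mean - pop_mean:+.0f}")
```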
... In addition, Singer and Ye (2013) outline several studies that failed to show that incentives affect data quality. Much research has shown that the use of preincentives will, if anything, reduce bias from nonresponse (Adua and Sharp 2010; Felderer et al. 2018; Groves et al. 2006; Petrolia and Bhattacharjee 2009); however, Parsons and Manierre (2014) note a circumstance where preincentives exacerbate nonresponse bias among a random sample of college students. ...
... A concern often voiced in the context of incentives is that the same monetary amount has a higher value for individuals with less wealth (Philipson 1997;Felderer et al. 2017). If this observation is indeed true, economically disadvantaged sample members might be more inclined to provide data in general and sensitive sensor data in particular. ...
Chapter
This chapter analyzes the effects of different incentive schemes on participation rates in a study combining self-reports and passive data collection using smartphones, and breaks these effects down by economic subgroup. Providing some form of incentive, whether monetary or another token of appreciation, is common practice for studies recruiting respondents to answer survey questions. The chapter provides a brief review of the literature on the effectiveness of incentives and the postulated mechanisms behind these effects, and explains the study design with an emphasis on the experimental conditions. It also presents an analysis of the results, including the overall effects of the different incentive treatments on app installation, the number of initially activated data-sharing functions, deactivation of data-sharing functions, retention of the app, and overall costs of data collection.
... Therefore, longitudinal panel studies strive to prevent nonresponse from the outset and to delay panel mortality (i.e., the withdrawal of participants) as long as possible. For this purpose, it is important to identify participants with a high nonresponse propensity at an early stage, so that appropriate intervention strategies can be implemented (e.g., providing more attractive incentives; see Felderer, Müller, Kreuter, & Winter, 2018; McGovern, Canning, & Bärnighausen, 2018). The primary objective of this paper is to introduce a novel approach for predicting future participation behavior in longitudinal surveys, which allows countermeasures against nonresponse to be implemented. ...
Article
Full-text available
Increasing nonresponse rates are a pressing issue for many longitudinal panel studies. Respondents frequently either refuse participation in single survey waves (temporary dropout) or discontinue participation altogether (permanent dropout). Contemporary statistical methods used to elucidate predictors of survey nonresponse are typically limited to small variable sets and ignore complex interaction patterns. The innovative approach of Bayesian additive regression trees (BART) is an elegant way to overcome these limitations because it does not specify a parametric form for the relationship between the outcome and its predictors. We present a BART event history analysis that identifies predictors of different types of nonresponse in order to anticipate response rates for upcoming survey waves. We apply our novel method to data from the German National Educational Panel Study comprising N = 4,559 students in grade 5, with nonresponse rates of up to 36% across five waves. A cross-validation and a comparison with logistic regression models with LASSO (least absolute shrinkage and selection operator) penalization underline the advantages of the approach. Our results highlight the potential of Bayesian discrete-time event modeling for the long-term projection of panel stability across multiple survey waves. Finally, potential applications of this approach for operational use in survey management are outlined.
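For readers unfamiliar with the data layout behind discrete-time event history models, the following sketch shows the person-period expansion such models are trained on; the roster and column names are hypothetical, and the BART fit itself would use a dedicated implementation (e.g., the R packages BART or dbarts), which is not reproduced here.

```python
import pandas as pd

# Hypothetical panel roster: one row per student, with the wave of permanent
# dropout (None = still participating after wave 5).
roster = pd.DataFrame({
    "person_id": [1, 2, 3],
    "dropout_wave": [3, None, 5],
})

# Person-period expansion: one row per person per wave at risk.
rows = []
for _, r in roster.iterrows():
    last = int(r["dropout_wave"]) if pd.notna(r["dropout_wave"]) else 5
    for wave in range(1, last + 1):
        rows.append({
            "person_id": r["person_id"],
            "wave": wave,
            # 1 only in the wave in which the person permanently drops out
            "dropout": int(pd.notna(r["dropout_wave"]) and wave == last),
        })
pp = pd.DataFrame(rows)
print(pp)  # this long format is what the (BART) classifier is trained on
```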
Article
The literature on the effects of incentives in survey research is vast and covers a diversity of survey modes. The mode of probability-based online panels, however, is still young and so is research into how to best recruit sample units into the panel. This paper sheds light on the effectiveness of a specific type of incentive in this context: a monetary incentive that is paid conditionally upon panel registration within two weeks of receiving the initial postal mail invitation. We tested early bird cash incentives in a large-scale recruitment experiment for the German Internet Panel (GIP) in 2018. We find that panel response rates are significantly higher when offering early bird cash incentives and that fieldwork progresses considerably faster, leading to fewer reminders and greater cost-effectiveness. Furthermore, sample representativeness is similarly high with or without early bird incentives.
Chapter
Because monetary incentives show a stronger positive effect on participation motivation and panel stability than non-monetary incentives, the sixth online survey of the student cohort of the German National Educational Panel Study (NEPS) experimented with switching the incentive scheme from a lottery entry to a cash incentive. While one half of the sample continued to be incentivized through a lottery, one quarter of the target persons was offered a cash incentive for completing the survey, and the remaining quarter was offered a choice between a cash incentive and a lottery entry. Both experimental groups achieved a response rate ten percentage points higher than the control group. The chapter presents and discusses the design of the experiment and the observed effects on response.
Article
Selective attrition out of longitudinal datasets is a concern for empirical researchers. This note illustrates a simple way to identify potential attrition bias in panel surveys by exploiting multiple types of simultaneous entries into the panel. The little-known phenomenon of natural refreshment, which adds to the entries through refreshment samples induced by data collectors, allows attrition bias to be disentangled from measurement errors connected to differences in participation experience (i.e., panel conditioning). A demonstrative application to subjective data from the German Socio-Economic Panel Study (SOEP) serves as an example and offers insights on health- and happiness-related attrition in panel surveys.
Article
Full-text available
Administrative records are increasingly being linked to survey records to heighten the utility of the survey data. Respondent consent is usually needed to perform exact record linkage; however, not all respondents agree to this request, and several studies have found significant differences between consenting and non-consenting respondents on the survey variables. To the extent that these survey variables are related to variables in the administrative data, the resulting administrative estimates can be biased due to non-consent. Estimating non-consent biases for linked administrative estimates is complicated by the fact that administrative records are typically not available for the non-consenting respondents. The present study overcomes this limitation by utilizing a unique data source, the German panel study "Labour Market and Social Security" (PASS), and linking the consent indicator to the administrative records (available for the entire sample). This situation permits the estimation of non-consent biases for administrative variables and avoids the need to link the survey responses. The impact of non-consent bias can be assessed relative to other sources of bias (nonresponse, measurement) for several administrative estimates. The results show that non-consent biases are present for a few estimates but are generally small relative to other sources of bias.
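In notation (ours, not the article's): with an administrative variable $y$ observed for the entire sample $S$ and $C \subseteq S$ the consenting respondents, the non-consent bias of an administrative estimate is

```latex
\mathrm{Bias}_{\mathrm{nc}}(\bar{y}) = \bar{y}_C - \bar{y}_S
```

which is estimable here precisely because $y$ is available for non-consenters as well.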
Article
Full-text available
While nonresponse rates in household surveys are increasing in most industrialized nations, the increasing rates do not always produce nonresponse bias in survey estimates. The linkage between nonresponse rates and nonresponse bias arises from the presence of a covariance between response propensity and the survey variables of interest. To understand the covariance term, researchers must think about the common influences on response propensity and the survey variable. Three variables appear to be especially relevant in this regard: interest in the survey topic, reactions to the survey sponsor, and the use of incentives. A set of randomized experiments tests whether those likely to be interested in the stated survey topic participate at higher rates and whether nonresponse bias on estimates involving variables central to the survey topic is affected by this. The experiments also test whether incentives disproportionately increase the participation of those less interested in the topic. The experiments show mixed results in support of these key hypotheses.
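The covariance term invoked here is commonly written as follows (a standard approximation from the nonresponse literature, not quoted from this article): for response propensities $\rho_i$ and survey variable $y_i$,

```latex
\mathrm{Bias}(\bar{y}_r) \approx \frac{\mathrm{Cov}(\rho_i, y_i)}{\bar{\rho}}
```

so the bias of the respondent mean vanishes when propensity and the survey variable are uncorrelated, regardless of how low the response rate is.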
Article
Full-text available
Fifty-nine methodological studies were designed to estimate the magnitude of nonresponse bias in statistics of interest. These studies use a variety of designs: sampling frames with rich variables, data from administrative records matched to sample cases, screening-interview data used to describe nonrespondents to main interviews, follow-up of nonrespondents to initial phases of the field effort, and measures of behavioral intentions to respond to a survey. This permits exploration of which circumstances produce a relationship between nonresponse rates and nonresponse bias and which do not. The predictors are design features of the surveys, characteristics of the sample, and attributes of the survey statistics computed in the surveys.
Article
Full-text available
This article reports the results of a meta-analysis of 38 experimental and quasi-experimental studies that implemented some form of mail survey incentive to increase response rates. A total of 74 observations, or cases, were classified into one of four incentive groups: prepaid monetary or nonmonetary rewards included with the initial survey mailing, and monetary or nonmonetary rewards conditional upon the return of the survey. Results were generated using an analysis-of-variance approach. The overall effect size across the 74 observations was low to moderate at d = .241. When compared across incentive types, only surveys that included rewards (monetary and nonmonetary, respectively) in the initial mailing yielded statistically significant estimates of effect size (d = .347, d = .136). The average increases in response rates over control conditions for these incentive types were 19.1 percent and 7.9 percent, respectively. There was no evidence of any impact for incentive types offering rewards contingent upon the return of the survey.
Chapter
Nonresponse is an important source of total survey error (TSE), and incentives are widely used to increase response rates. This chapter discusses the theories behind incentive effects and the possible forms incentives can take, considers the optimal incentive amount, addresses the different modes of data collection and the relation between incentives and data quality, and ends with best practices and a view toward the future, in order to help survey researchers identify if, how, and how much incentives should be used in their surveys.
Article
This meta-analysis quantifies the dose-response relationship between monetary incentives and response rates in household surveys. It updates and augments the existing meta-analyses on incentives by analyzing the latest experimental research, focusing specifically on general-population household surveys, and includes the three major data-collection modes (mail, telephone, and in-person) under the same analytic framework. Using hierarchical regression modeling and literature from the past 21 years, the analysis finds a strong, nonlinear effect of incentives. Survey mode and incentive delivery timing (prepaid or promised) also play important roles in the effectiveness of incentives. Prepaid incentives offered in mail surveys had the largest per-dollar impact on response. Incentive timing appears to play an important role in the effectiveness of incentives offered in telephone surveys but not in-person surveys. Our model estimates a null effect of promised incentives in mail surveys; however, given the dearth of experiments testing this type of incentive, we are unable to draw firm conclusions regarding their effectiveness. Survey burden and survey year both were negatively correlated with response overall. However, neither significantly impacted the dose-response relationship. Survey sponsorship affected neither response rate nor incentive effectiveness. The development and results of the model are discussed, and dose-response estimates specific to mode and incentive timing are presented.
Article
This article is intended to supplement rather than replace earlier reviews of research on survey incentives, especially those by Singer (2002); Singer and Kulka (2002); and Cantor, O’Hare, and O’Connor (2008). It is based on a systematic review of articles appearing since 2002 in major journals, supplemented by searches of the Proceedings of the American Statistical Association’s Section on Survey Methodology for unpublished papers. The article begins by drawing on responses to open-ended questions about why people are willing to participate in a hypothetical survey. It then lays out the theoretical justification for using monetary incentives and the conditions under which they are hypothesized to be particularly effective. Finally, it summarizes research on how incentives affect response rates in cross-sectional and longitudinal studies and, to the extent information is available, how they affect response quality, nonresponse error, and cost-effectiveness. A special section on incentives in Web surveys is included.
Article
We conducted a randomized experiment on a face-to-face interview survey in order to test the effects on response rates of a prepaid nonmonetary incentive. Results showed a statistically significant increase in response rates, mostly through reduction in refusal rates, in the half sample that received the incentive (a gift-type ballpoint pen) as compared with a no incentive control group. The effect appears to be due to greater cooperation from incentive recipients at the initial visit by an interviewer. Unexpectedly, the incentive group also showed a significantly higher rate of sample ineligibility, possibly due to easier identification of vacant residences or nonexistent addresses. In addition, evidence suggests greater response completeness among responding incentive recipients early in the interview, with no evidence of increased measurement error due to the incentive.
Article
The joint and comparative effects of the use of monetary incentives and follow-up mailings were examined in a mail survey of suburban Washington, DC cable television subscribers. Four experimental groups received monetary incentives enclosed with the first mailing only ($0.25, $0.50, $1.00, or $2.00) and three follow-up mailings. These groups were compared with each other and against a control group that did not receive an incentive. The results indicated that the response rate from the first mailing increased significantly as the incentive amount increased from zero to $0.25, and from $0.25 to $1.00. Four mailings without an incentive produced a higher response rate than a single mailing with an incentive, but a combination of follow-up mailings and a $1.00 or $2.00 incentive produced a significantly higher response rate than an equivalent number of mailings without an incentive. There was some evidence of intertreatment response bias. Larger monetary incentives tended to produce: (1) a greater degree of effort expended in completing the questionnaires, as measured by the number of short answers and comments provided and the number of words written, and (2) comments that were more favorable toward the survey sponsor.
Chapter
Contents: Introduction; Theoretical Literature and Predictions for RDD Surveys; Methodology; Incentives and Response Rates; Differential Effects of Incentives Across Population Groups; Do Incentives Impact Estimates and Data Quality?; Monetary Costs and Benefits; Conclusion
Chapter
Incentives in the form of a gift or money are given to survey respondents in the hope that this will increase response rates and possibly also reduce non-response bias. They can also act as a means of thanking respondents for taking part and showing appreciation for the time the respondent has given to the survey. There is a considerable literature devoted to the effects of respondent incentives, though most studies are based on cross-sectional surveys. These studies show that both the form of the incentive (gift or money) and the way in which the incentive is delivered to the respondent have a measurable impact on response rates. A monetary incentive sent to the respondent in advance of the interview has the greatest effect on increasing response, regardless of the amount of money involved. This type of unconditional incentive is thought to operate through a process of social reciprocity: the respondent perceives that they have received something unconditionally on trust and so reciprocates in kind by taking part in the research. Some of the literature suggests an improvement in data quality from respondents who are given an incentive, in terms of reduced item non-response and reduced bias through encouraging certain demographic groups to participate who might otherwise refuse. It is generally felt that incentives are more appropriate the greater the burden of taking part, and longitudinal surveys certainly constitute high-burden surveys, yet there is little guidance on how and when incentives should be employed in longitudinal surveys.

In this paper, we review the use made of incentives in longitudinal surveys, describing common practices and the rationale for these practices. We attempt to identify the features of longitudinal surveys that are unique and the features they share with cross-sectional surveys in terms of motivations and opportunities for the use of incentives and the possible effects of incentives. We then review experimental evidence on the effects of incentives in longitudinal surveys.

Finally, we report on two experimental studies carried out in the UK. Both address a particular issue in longitudinal surveys, namely the effect of changing the way incentives are used part-way through the survey, and each experiment addressed a different type of change. The first experiment was carried out on the British Election Panel Survey, where an incentive was introduced for the first time at wave 6. Three experimental groups were used at both waves 6 and 7, consisting of a zero incentive and two different values of unconditional incentive. The second experiment was carried out on wave 14 (2004) of the British Household Panel Survey (BHPS). BHPS respondents have always received a gift token as an incentive, and since wave 6 of the study (1996) this has been offered unconditionally, in advance of the interview, to the majority of respondents. The wave 14 experiment was designed to assess the effect on response of increasing the value of the incentive offered from £7 to £10 for established panel members, many of whom had cooperated with the survey for thirteen years.
Article
"The paper introduces the general design features and particularities of a new largescale panel study for research on recipients of benefits for the long-term unemployed (the so called Unemployment Benefit II) in Germany that combines a sample of 6000 recipient households with an equally large sample of the general population. Particular focus is on the sampling procedure for the general population, where a commercial database was used to draw a sample stratified by status." (author's abstract, IAB-Doku) ((en))
Article
The production of data, and the functioning of the market for observations, are universal concerns to all fields of positive economics. Economists, however, have typically placed greater emphasis on systematically analyzing the consumption of data than on considering its production. In the production of data through surveys, an important input market is that of labour, in which a demander trades observations with the supplying sample members. This paper analyses optimal monopsony compensation in such data markets, the important relationship it bears to estimation using the data that are obtained, and the statistical effects of implicit public wage regulations that are present in U.S. markets for observations.
Article
To judge whether the difference between two point estimates is statistically significant, data analysts often examine the overlap between the two associated confidence intervals. We compare this technique to the standard method of testing significance under the common assumptions of consistency, asymptotic normality, and asymptotic independence of the estimates. Rejection of the null hypothesis by the method of examining overlap implies rejection by the standard method, whereas failure to reject by the method of examining overlap does not imply failure to reject by the standard method. As a consequence, the method of examining overlap is more conservative (i.e., rejects the null hypothesis less often) than the standard method when the null hypothesis is true, and it mistakenly fails to reject the null hypothesis more frequently than does the standard method when the null hypothesis is false. Although the method of examining overlap is simple and especially convenient when lists or graphs of confidence intervals have been presented, we conclude that it should not be used for formal significance testing unless the data analyst is aware of its deficiencies and unless the information needed to carry out a more appropriate procedure is unavailable.
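The conservatism is straightforward to show (our rendering of the standard argument, with $z$ the critical value): the standard method and the overlap method reject when, respectively,

```latex
|\hat{\theta}_1 - \hat{\theta}_2| > z\,\sqrt{\mathrm{se}_1^2 + \mathrm{se}_2^2}
\qquad \text{vs.} \qquad
|\hat{\theta}_1 - \hat{\theta}_2| > z\,(\mathrm{se}_1 + \mathrm{se}_2),
```

and since $\mathrm{se}_1 + \mathrm{se}_2 \geq \sqrt{\mathrm{se}_1^2 + \mathrm{se}_2^2}$, rejection by the overlap criterion implies rejection by the standard test, but not conversely.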
Büngeler, K., M. Gensicke, J. Hartmann, R. Jäckle, and N. Tschersich. IAB-Haushaltspanel im Niedrigeinkommensbereich: Welle 3 (2008/2009) [IAB household panel of those with low income: Wave 3 (2008/2009)]. FDZ-Methodenreport 10/2010.

Singer, E., R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little. The use of incentives to reduce nonresponse in household surveys.

Dillman, D. A., J. D. Smyth, and L. M. Christian. 2014. Internet, phone, mail and mixed-mode surveys: The tailored design method, 4th ed. New York: Wiley.

Beste, J. 2011. Selektivitätsprozesse bei der Verknüpfung von Befragungs- mit Prozessdaten. Record Linkage mit Daten des Panels "Arbeitsmarkt und soziale Sicherung" und administrativen Daten der Bundesagentur für Arbeit. FDZ Methodische Aspekte zu Arbeitsmarktdaten [Selectivities in the linkage of survey and process data: Record linkage with data from the panel "Labour Market and Social Security" and the administrative records of the Federal Employment Agency]. Methods report. http://doku.iab.de/fdz/reporte/2011/MR_09-11.pdf (accessed June 30, 2017).