Response Behavior in an Adaptive Survey Design for the Setting-Up Stage of a Probability-Based Access Panel in Germany

... Selection criteria were being aged 18+ and entitled to vote in Germany. The response rate (completed interviews) was 13.6%; partial interviews added a further 0.9% (Engel, 2015). The survey attitude scale was part of the recruitment interview. ...
Declining response rates worldwide have stimulated interest in understanding what may be driving this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short nine-item survey attitude scale measuring three constructs that many scholars consider relevant to the decision to participate in surveys: survey enjoyment, survey value, and survey burden. The scale is based on a literature review of earlier work by multiple authors. Our overarching goal is to develop and validate a concise, effective measure of how individuals feel about responding to surveys, one that can be implemented in surveys and panels to understand willingness to participate and to improve survey effectiveness. The research questions concern the factor structure, measurement equivalence, reliability, and predictive validity of the scale. The data came from three probability-based panels: the German GESIS and PPSM panels and the Dutch LISS panel. The survey attitude scale proved to have a replicable three-dimensional factor structure (survey enjoyment, survey value, and survey burden). Partial scalar measurement equivalence was established across the three panels, which employed two languages (German and Dutch) and three measurement modes (web, telephone, and paper mail). The reliability of all three subscales (enjoyment, value, and burden) was satisfactory. Furthermore, the subscales correlated with survey response in the expected directions, indicating predictive validity.
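The reliability of a subscale like survey enjoyment is commonly summarized with Cronbach's alpha. As a minimal sketch (the simulated data, item count, and the `cronbach_alpha` helper below are invented for illustration, not taken from the GESIS, PPSM, or LISS analyses), alpha for an item-score matrix can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated data: three hypothetical "survey enjoyment" items on a 1-5 scale,
# all driven by one latent attitude plus item-specific noise.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=200)
items = np.clip(np.round(latent[:, None] + rng.normal(0, 0.7, size=(200, 3))), 1, 5)

print(round(cronbach_alpha(items), 2))
```

Because the three simulated items share a common latent component, alpha comes out well above the conventional 0.7 threshold; with independent items it would fall toward zero.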
The chapter covers a survey experiment on two kinds of response effects. Response order is analysed against the background of the prominent primacy/recency hypothesis in survey methodology. Since that hypothesis refers to unordered scales, a modified version is suggested for the case of ordinal scales. The use of four vs. five response categories represents a second experimental factor. The probit regression analysis confirms both the “modified primacy-effect hypothesis” and the “missing-equivalence hypothesis” on the use of four vs. five response categories. The experiment is embedded in an adaptive survey design. The data come from the “Bremen City-of-Science Survey”, conducted in mixed web and telephone mode in the spring of 2016. For this survey, a probability sample of residents aged 18+ was drawn from the population register of the city of Bremen.
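The chapter's probit analysis is not reproduced here, but the general technique can be sketched. The example below (simulated data; the covariates `response_order` and `five_categories` are illustrative assumptions, not the chapter's actual design variables) fits a binary probit model by maximizing the likelihood directly:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_nll(beta, X, y):
    """Negative log-likelihood of a probit model: P(y=1) = Phi(X @ beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(42)
n = 1000
response_order = rng.integers(0, 2, n)   # 0 = one scale direction, 1 = the other
five_categories = rng.integers(0, 2, n)  # 0 = four options, 1 = five options

X = np.column_stack([np.ones(n), response_order, five_categories])
true_beta = np.array([-0.2, 0.5, -0.3])  # made-up effects for the simulation
y = (rng.uniform(size=n) < norm.cdf(X @ true_beta)).astype(int)

fit = minimize(probit_nll, x0=np.zeros(3), args=(X, y), method="BFGS")
print(np.round(fit.x, 2))  # estimates near the simulated coefficients
```

In practice one would use a packaged estimator (e.g., a probit routine in a statistics library) rather than hand-rolled optimization; the sketch only shows what such a routine computes.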
In Germany, social and market research still most frequently collects data by telephone or via the Internet (ADM 2014). Academic survey research, however, is increasingly confronted with the problem of low response rates, above all in telephone surveys (e.g., via randomized last digit / random digit dialing) (Aust and Schröder 2009; Häder et al. 2009; Kreuter 2013). The procedure going back to Gabler and Häder (2002) was developed at the time to counteract the trend of declining telephone-directory coverage.
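The randomized-last-digits idea behind such sampling frames can be sketched in a toy example. The listed numbers and the `randomize_last_digits` helper below are hypothetical illustrations, not the Gabler-Häder implementation: the final digits of directory-listed numbers are replaced with random ones, so that unlisted households in the same number blocks also get a chance of selection:

```python
import random

random.seed(1)

# Hypothetical directory-listed telephone numbers (illustrative only)
listed_numbers = ["042112345", "042167890", "042155555"]

def randomize_last_digits(number: str, k: int = 2) -> str:
    """Replace the last k digits with uniformly random digits."""
    suffix = "".join(random.choice("0123456789") for _ in range(k))
    return number[:-k] + suffix

# One generated number per listed number; real designs generate many per block
frame = [randomize_last_digits(n) for n in listed_numbers]
print(frame)
```

The generated numbers keep the prefix (and thus the regional number block) of the listed number while the randomized suffix covers unlisted lines in that block.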