Abstract
Some people do not use the Internet at all or only to a limited extent. Insofar as these non-users differ from Internet users, this can pose a problem for probability-based online panels that seek to draw inferences about the general population. This contribution therefore presents two strategies for including people without Internet access, discusses them critically against the background of the existing literature, and complements them with findings from the German Internet Panel and the GESIS Panel.
The outbreak of the COVID-19 pandemic has a massive impact on society. To curb the spread of the SARS-CoV-2 virus, unprecedented containment measures are being taken by governments around the world. These measures and the fear of the disease itself are likely affecting the economy, social inequality, mental and physical health, and even people's perception of good democratic governance. Equally unprecedented is the speed at which these massive changes take place and the lack of statistical evidence that accompanies them. Within days of the first containment measures in Germany, the German Internet Panel (GIP) launched the Mannheim Corona Study (MCS), a daily rotating panel study of the general adult population of approximately 3,600 respondents. Its data and reports now inform the crisis cabinet of the German government and are the basis for groundbreaking social and economic research. This paper gives insights into the MCS methodology and data quality.
There is an ongoing debate in the survey research literature about whether and when probability and nonprobability sample surveys produce accurate estimates of a larger population. Statistical theory provides a justification for confidence in probability sampling as a function of the survey design, whereas inferences based on nonprobability sampling are entirely dependent on models for validity. This article reviews the current debate about probability and nonprobability sample surveys. We describe the conditions under which nonprobability sample surveys may provide accurate results in theory and discuss empirical evidence on which types of samples produce the highest accuracy in practice. From these theoretical and empirical considerations, we derive best-practice recommendations and outline paths for future research.
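The distinction drawn here between design-based and model-dependent inference can be made concrete with a minimal sketch. The following Python snippet uses entirely hypothetical numbers (not taken from the article): the probability-sample estimate rests on known inclusion probabilities, whereas the volunteer-sample estimate has no such design justification and is valid only if a selection model holds.

```python
# Minimal sketch of design-based vs. model-dependent inference.
# All numbers are hypothetical and only illustrate the logic.

def design_weighted_mean(values, inclusion_probs):
    """Hajek-style estimate: weight each observation by 1 / pi_i, where pi_i
    is its known inclusion probability under the sampling design."""
    weights = [1.0 / p for p in inclusion_probs]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

# Probability sample: units drawn with known, possibly unequal, probabilities.
prob_values     = [1, 0, 1, 1, 0, 1]
inclusion_probs = [0.10, 0.10, 0.05, 0.05, 0.20, 0.20]
print("Design-based estimate:",
      round(design_weighted_mean(prob_values, inclusion_probs), 3))

# Nonprobability sample: inclusion probabilities are unknown, so only the raw
# mean (or a model-based adjustment of it) is available; its validity depends
# entirely on the assumed selection model.
volunteer_values = [1, 1, 1, 0, 1]
print("Unweighted volunteer-sample estimate:",
      round(sum(volunteer_values) / len(volunteer_values), 3))
```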
Access to the Internet is becoming increasingly important for all generations. However, a digital gap in Internet use remains between younger and older individuals as well as within the elderly population itself. This study, therefore, aimed to investigate Internet use among elderly Europeans. Representative data across 17 countries from the Survey of Health, Ageing and Retirement in Europe (SHARE) were examined. Analyses were based on the responses of 61,202 Europeans aged ≥ 50. Results highlight that, on average, 49% of all respondents use the Internet. However, the situation varies widely among European countries. Alongside individual indicators, such as age, gender, and social class, results indicate that previous experience with computers during one’s time in the workplace is positively associated with Internet use in old age. Furthermore, use of the Internet among an individual’s social network positively influences their use. Wider contextual structures such as area of residence and country-specific wealth and communication technology infrastructure also tend to promote Internet use among elderly Europeans. Data from SHARE indicate that private Internet use among older Europeans is driven by personal resources, prior experiences with technology, social salience as well as contextual influences.
In this introductory chapter, written by the six editors of this volume, we introduce and attempt to systematize the key concepts used when discussing online panels. The connection between Internet penetration and the evolution of panels is discussed, as are the different types of online panels, their composition, and how they are built. Most online panels do not use probability-based methods, but some do, and the differences are discussed. The chapter also describes in some detail the process of joining a panel, answering initial profiling questions, and becoming an active panel member. We discuss the most common sampling techniques, highlighting their strengths and limitations, and touch on techniques to increase representativeness when using a non-probability panel. The variety of incentive methods in current use is also described. Panel maintenance is another key issue, since attrition is often substantial and a panel must be constantly refreshed. Online panels can be used to support a wide range of study designs, some cross-sectional and others longitudinal, where the same sample members are surveyed multiple times on the same topic. We also discuss industry standards and professional association guidelines for conducting research using online panels. The chapter concludes with a look to the future of online panels and, more generally, online sampling via means other than classic panels.
The Internet is considered an attractive option for survey data collection. However, some people do not have access to it. One way to address this coverage problem for general population surveys is to draw a probabilistic sample and provide Internet access to the selected units who do not have it and agree to participate. This is what the KnowledgePanel and the Longitudinal Internet Studies for the Social Sciences (LISS) panel do. However, a selection effect is still possible. Units without previous Internet access might refuse to participate in a web panel, even if provided with the necessary equipment. Thus, efforts to provide the necessary equipment may not be worth it. This article investigates the gain in representativeness from offering the equipment to non-Internet units in a web panel using tablets: the French Longitudinal Internet Studies for the Social Sciences panel. We find that the number of non-Internet units who agree to participate is low. This is due not only to their lower response rates but also to the small proportion of non-Internet units in the French population. In addition, they participate less in given surveys once they become panelists. At the same time, they are very different from the Internet units. Therefore, even though the overall gain in representativeness is small because of the small number of such units, there are a few important variables (e.g., education) on which their inclusion yields a more representative sample of the general population.
Inferential statistics teach us that we need a random probability sample to infer from a sample to the general population. In online survey research, however, volunteer access panels, in which respondents self-select themselves into the sample, dominate the landscape. Such panels are attractive due to their low costs. Nevertheless, recent years have seen increasing debate about the quality of such panels, in particular about errors in representativeness and measurement. In this article, we describe four probability-based online and mixed-mode panels for the general population, namely, the Longitudinal Internet Studies for the Social Sciences (LISS) Panel in the Netherlands, the German Internet Panel (GIP) and the GESIS Panel in Germany, and the Longitudinal Study by Internet for the Social Sciences (ELIPSS) Panel in France. We compare them in terms of sampling strategies, offline recruitment procedures, and panel characteristics. Our aim is to provide the scientific community with an overview of the availability of such data sources, to demonstrate potential strategies for recruiting and maintaining probability-based online panels to practitioners, and to direct analysts of the comparative data collected across these panels to methodological differences that may affect comparative estimates.
Probability-based online panels are beginning to replace traditional survey modes for existing established surveys in Europe and the United States. In light of this, current standards for panel response rate calculations are reviewed here. To populate these panels cost-effectively, more diverse recruitment methods, such as mail, telephone, and recruitment modules added to existing surveys, are being used, either alone or in combination. This results in panel member cohorts from different modes, complicating panel response rate calculations. Also, as a panel ages with inevitable attrition, multiple cohorts result from panel refreshment and growth strategies. Formulas are presented to illustrate how to handle multiple cohorts for panel metrics. Additionally, drawing on relevant metrics used for a panel response rate, we demonstrate a computational tool to assist planners in building a probability-based panel. This provides a means to estimate the recruitment effort required to build a panel of a predetermined size.
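To illustrate the kind of cohort bookkeeping such formulas involve, here is a small Python sketch that computes a cohort-weighted cumulative panel response rate. The three-stage decomposition (recruitment, profile, retention) and all cohort figures are hypothetical simplifications for illustration, not the metrics proposed in the paper.

```python
# Illustrative sketch (not the paper's formulas): a cumulative panel response
# rate computed across recruitment cohorts, in the spirit of AAPOR-style
# panel metrics. All cohort figures below are hypothetical.

from dataclasses import dataclass

@dataclass
class Cohort:
    invited: int             # sampled units invited during recruitment
    recruitment_rate: float  # share of invited units who joined the panel
    profile_rate: float      # share of recruits who completed the profile survey
    retention_rate: float    # share of profiled members still active today

def cumulative_response_rate(cohorts: list[Cohort]) -> float:
    """Weight each cohort's compound rate by its share of all invited units."""
    total_invited = sum(c.invited for c in cohorts)
    return sum(
        (c.invited / total_invited)
        * c.recruitment_rate * c.profile_rate * c.retention_rate
        for c in cohorts
    )

def active_members(cohorts: list[Cohort]) -> int:
    """Current panel size implied by the cohort figures."""
    return round(sum(
        c.invited * c.recruitment_rate * c.profile_rate * c.retention_rate
        for c in cohorts
    ))

if __name__ == "__main__":
    cohorts = [
        Cohort(invited=10_000, recruitment_rate=0.25, profile_rate=0.85, retention_rate=0.60),
        Cohort(invited=6_000, recruitment_rate=0.30, profile_rate=0.80, retention_rate=0.75),
    ]
    print(f"Cumulative panel response rate: {cumulative_response_rate(cohorts):.3f}")
    print(f"Active panel members: {active_members(cohorts)}")
```

Running such a sketch backwards (fixing the target panel size and solving for the number of invitations) is the kind of planning calculation the paper's computational tool supports.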
In order to investigate the advantage of mixed-mode (MM) surveys, selection effects between the modes should be evaluated. Selection effects refer to differences in respondent compositions on the target variables between the modes. However, estimation of selection effects is not an easy task because they may be completely confounded with measurement effects between the modes (differences in measurement error). Publications concerning the estimation of these mode effects are scarce. This article presents and compares three methods that allow measurement effects and selection effects to be evaluated separately. The first method starts from existing publications that avoid the confounding problem by introducing a set of mode-insensitive variables into the analysis model. However, this article will show that this method involves unrealistic assumptions in most practical research. The second and the third methods make use of an MM sample extended by comparable single-mode data. The assumptions, advantages, and disadvantages of all three methods are discussed. Each method will further be illustrated using a set of six variables relating to opinions about surveys among the Flemish population. The results show large differences between the methods.
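The logic of the second and third methods, which extend a mixed-mode sample with comparable single-mode data, can be sketched in a deliberately simplified way. The following Python example uses hypothetical means, assumes both samples cover the same population and that face-to-face measurement is comparable across them, and ignores sampling error and covariate adjustment; it is not the article's estimator.

```python
# Illustrative decomposition only (hypothetical numbers, simplified assumptions).
# In a mixed-mode (MM) design, the observed web-vs-F2F difference confounds
# selection and measurement effects:
#     observed_diff = selection_effect + measurement_effect

p_web        = 0.6   # share of MM respondents answering by web
mean_web_mm  = 3.9   # MM sample, web respondents (web measurement)
mean_f2f_mm  = 3.4   # MM sample, F2F respondents (F2F measurement)
mean_f2f_ref = 3.6   # single-mode F2F reference sample (same population, F2F measurement)

# Under the stated assumptions, the F2F-measured mean of the web group is the
# value that reconciles the MM composition with the reference sample's mean:
#   mean_f2f_ref = p_web * web_group_f2f + (1 - p_web) * mean_f2f_mm
web_group_f2f = (mean_f2f_ref - (1 - p_web) * mean_f2f_mm) / p_web

observed_diff      = mean_web_mm - mean_f2f_mm
selection_effect   = web_group_f2f - mean_f2f_mm   # composition difference, same mode
measurement_effect = mean_web_mm - web_group_f2f   # same people, different mode

print(f"observed: {observed_diff:.2f}  selection: {selection_effect:.2f}  "
      f"measurement: {measurement_effect:.2f}")
```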
The paper looks into the processes and outcomes of setting up and maintaining a probability-based longitudinal online survey that is recruited face-to-face and representative of both the online and offline population aged 16 to 75 in Germany. This German Internet Panel (GIP) studies political and economic attitudes and reform preferences through bi-monthly longitudinal online interviews of individuals. The results presented demonstrate that a carefully designed and implemented online panel can produce high-quality data at lower marginal costs than existing panels that operate solely in face-to-face mode. Analyses of the representativeness of the online sample showed no major coverage or nonresponse biases. Finally, including offline households in the panel is important, as it improves the representation of the older and female segments of the population.
We examine the dimensions of Internet use based on a representative sample of the population of the UK, making three important contributions. First, we clarify theoretical dimensions of Internet use that have been conflated in prior work. We argue that the property space of Internet use has three main dimensions: amount of use, variety of different uses, and types of use. Second, the Oxford Internet Survey 2011 data set contains a comprehensive set of 48 activities ranging from email to online banking to gambling. Using principal components analysis, we identify 10 distinctive types of Internet activities. This is the first typology of Internet uses to be based on such a comprehensive set of activities. We use regression analyses to validate the three dimensions and to identify the characteristics of the users of each type. Each type has a distinctive and different kind of user. The Internet is an extremely diverse medium. We cannot discuss ‘Internet use’ as a general phenomenon; instead, researchers must specify what kind of use they examine.
This paper investigates whether it is possible to improve the representativeness of an Internet panel by including non-Internet households. We study the LISS panel, managed by CentERdata, an Internet panel based on a probability sample that comprises approximately 5,000 households. The LISS panel provides non-Internet households, that is, households with no Internet access at the time of sampling, with cost-free equipment and an Internet connection. In early 2010 the LISS panel contained 545 non-Internet households, approximately 10% of the entire panel. The analyses show that particularly older households, non-western immigrants, and one-person households are less likely to have Internet access. The LISS panel includes a representative sample of non-Internet households, except for households with a high average age ("the oldest old"). Non-Internet households who participate in the panel show higher response rates on the individual questionnaires and lower attrition rates. While significant differences between the panel and the Dutch population remain, the complete LISS panel, with both Internet and non-Internet households, appears to be closer to the Dutch population than the panel consisting only of Internet households for all socio-demographic variables we tested. Furthermore, about half of the non-Internet households start to use the Internet after they have become panel members. They use fewer of the options offered by the Internet and mainly use the simpler applications, such as e-mail and information search, compared to persons living in Internet households. In this sense, they remain different from the original Internet households and continue to contribute to the quality of the panel data.
Using a nationally representative British survey, this article explores the extent to which adults are using the internet for learning activities because they choose to (digital choice) or because of (involuntary) digital exclusion. Key findings suggest that reasons for (dis)engagement with the internet or the uptake of different kinds of online learning opportunities are somewhat varied for different groups, but that both digital choice and exclusion play a role. Thus, it is important for policy initiatives to better understand these groups and treat them differently. Furthermore, the more informal the learning activity, the more factors that play a significant role in explaining uptake. Policies designed to support individuals’ everyday interests, as opposed to more formal kinds of learning, are likely to be more effective in increasing people’s productive engagement with online learning opportunities.
"Das Forschungsprogramm ALLBUS (Allgemeine Bevölkerungsumfrage der Sozialwissenschaften) dient dem Ziel, Daten für die empirische Sozialforschung zu erheben und umgehend allgemein zugänglich bereitzustellen. Die Verwendung des ALLBUS in Sekundäranalysen erfordert es, jede Phase des Forschungsablaufs so transparent wie möglich zu gestalten. Damit die Nutzer des ALLBUS den Prozess der Datenerhebung nachvollziehen und sich kritisch mit den gewonnenen Daten auseinandersetzen können, werden Konzeption und Durchführung der einzelnen Studien ausführlich dokumentiert, so auch im vorliegenden Methodenbericht für den ALLBUS 2008. Im Folgenden wird zunächst die allgemeine Konzeption des ALLBUS- und des ISSP-Programms kurz vorgestellt (Abschnitt 2). In den Abschnitten 3 und 4 werden die Inhalte des ALLBUS und ISSP 2008 erläutert. Die Stichprobenziehung für den ALLBUS 2008 wird in Abschnitt 5, das Feldgeschehen in Abschnitt 6 dargestellt. Der Abgleich der Verteilungen demographischer Merkmale in der realisierten ALLBUS-Stichprobe mit den Mikrozensusergebnissen in Abschnitt 7 liefert abschließend den Nutzern wichtige Anhaltspunkte für die Beurteilung der Stichprobenqualität." (Textauszug)
A potential limitation of web-only panels of the general public, even when households are selected using probability methods, is that only about 70 percent of U.S. households have members with Internet access. In addition, some members of Internet-connected households may be unable or unwilling to participate over the web. The Gallup Panel uses both web and mail modes to survey respondents and in 2006 included approximately 50,000 households selected by random-digit dialing. Frequent Internet users were assigned to respond by the web, while others were assigned to participate by mail using a paper questionnaire with a similar visual layout to the web. We use several approaches to determine whether or not the mail option adds value to the results in an otherwise Internet panel and organize our analyses around answering a series of questions. First, does the use of mail allow different types of people to be included? Second, do mail and web respondents give different answers to the same questions? Third, does weighting on and controlling for demographics eliminate any differences in responses from mail and web respondents and indicate that mail is not needed? Finally, do differences exist when responses are collected using an independent mode? In general, the answers to these questions suggest that use of mail adds value to the panel results and improves the overall accuracy of survey results.
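Whether demographic weighting removes mode differences can be illustrated with a minimal post-stratification sketch. The benchmark shares and mode-specific estimates below are hypothetical, not Gallup Panel figures, and serve only to show the logic of the check described in the abstract.

```python
# Minimal post-stratification sketch (hypothetical figures, not the Gallup
# Panel's data): reweight each mode's respondents to common population
# benchmarks and check whether a mode difference in an estimate survives.

population_share = {"under_50": 0.55, "50_plus": 0.45}   # assumed benchmarks

def poststratified_mean(cell_means):
    """Combine age-group means using the benchmark population shares."""
    return sum(population_share[g] * m for g, m in cell_means.items())

web_means  = {"under_50": 0.62, "50_plus": 0.48}   # hypothetical estimates by mode
mail_means = {"under_50": 0.60, "50_plus": 0.41}

print("Web, adjusted: ", round(poststratified_mean(web_means), 3))
print("Mail, adjusted:", round(poststratified_mean(mail_means), 3))
# If the adjusted estimates still differ, demographic weighting alone does not
# remove the mode/composition difference, which is the question the study asks.
```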
The coronavirus SARS-CoV-2 outbreak has stimulated numerous online surveys that are mainly based on online convenience samples or commercial online access panels in which participants select themselves. The results are, nevertheless, often generalized to the general population. In our paper we investigate the potential bias that is introduced by respondents’ self-selection. The analysis is based on survey data of the “GESIS Panel Special Survey on the Coronavirus SARS-CoV-2 Outbreak in Germany”, together with background information from the GESIS Panel. Our analyses indicate a nonignorable amount of selection bias in measures of personality traits among online survey respondents. This provides some evidence that participating in an online survey and complying with measures that can minimize the risk of being infected with the SARS-CoV-2 virus are confounded. Hence, generalizing these results to the general population bears the risk of over- or underestimating the share of the population that complies with specific measures.
The Socio-Economic Panel (SOEP) is the largest multidisciplinary household study in Germany, in which almost 30,000 persons in 15,000 households have been interviewed in person every year since 1984. From the beginning, proxy information has also been collected on the children living in the surveyed households, with a focus on childcare and school attendance. In 2001, a youth questionnaire was added, which is answered directly by 16-17-year-olds and covers youth-specific topics. Since 2006, the youth survey has been supplemented by a test of cognitive potential. In addition, since 2003 seven questionnaires for specific age cohorts of children living in SOEP households have been introduced successively, beginning with newborns, each exploring topics relevant to that stage of life. From the age of 11-12, the children are interviewed themselves. This contribution describes the topic areas relating to children and adolescents covered by the SOEP and gives an exemplary overview of relevant studies that have been conducted with these data.
This paper adds to the discussion on the value of online surveys for political science research. Mainly because of the lower costs, collecting survey data over the web has become increasingly popular in recent years, despite the higher sampling and coverage error in web-only surveys, especially online access polls. Recruiting respondents for the actual panel surveys from a representative sample using a different mode is regarded as a solution to the sampling problem. Two approaches have been used to tackle the problem of coverage error: providing respondents with computers (e.g., LISS, ELIPSS, GIP) and offering a mode other than online to respondents, thereby adopting a mixed-mode design (e.g., paper, as in the Gallup Panel and the GESIS Panel). The literature suggests that offering participation in the respondent’s preferred mode affects response rates positively, but not much is known about respondents’ reasons for choosing a specific mode. We argue that it is important to understand this decision in order to evaluate the selection into online surveys and the consequences this has for data quality. We investigate this question by drawing on data from the GESIS Panel face-to-face recruitment interview for building a mixed-mode access panel (paper and web) in Germany, which gave Internet users a choice of mode. Our results suggest that web literacy, age, and education alone do not explain the mode choice, but that affinity towards the technology related to the online mode has an independent effect. In a second step, we analyse the effect of this selection mechanism on answers to typical variables used in political participation research, such as media competence, political interest, and civic duty, in the subsequent mixed-mode survey. We assess the added value of adopting a mixed-mode strategy. The results inform the evaluation of biases in unimode online surveys.
Various open probability-based panel infrastructures have been established in recent years, allowing researchers to collect high-quality survey data. In this report, we describe the processes and deliverables of setting up the GESIS Panel, the first probability-based mixed-mode panel infrastructure in Germany open for data collection to the academic research community. The reference population for the GESIS Panel is the German-speaking population aged between 18 and 70 years permanently residing in Germany. In 2013, approximately 5,000 panelists were recruited from a random sample drawn from municipal population registers. We describe the outcomes of the sampling strategy and the multistep recruitment process, involving computer-assisted personal interviews conducted at respondents’ homes. Next, we describe the outcomes of the two self-administered survey modes (online and paper-and-pencil) of the GESIS Panel used for the initial profile survey and all subsequent bimonthly data collection waves. Across all stages of setting up the GESIS Panel, we report sample composition discrepancies for key demographic variables between the GESIS Panel and established benchmark surveys. Overall, the findings highlight the usefulness of pursuing a mixed-mode strategy when building a probability-based panel infrastructure in Germany.
Many older people do not use the Internet. We investigated the attitudes of older people who do (onliners) or do not (offliners) use the Internet, to assess their views of the Internet and whether they see the Internet as a resource for coping with everyday life situations. Participants aged ≥ 65 years (N = 1,037), living in Switzerland, were interviewed in a telephone survey. Descriptive and multivariate analyses were conducted. The data show (a) many of the respondents viewed the Internet as useful, in general, and for coping with everyday life situations; (b) onliners saw more positive aspects of the Internet than did offliners; and (c) among onliners, 53% agreed with the statement, "The Internet allows me to stay independent longer into old age." However, it appears that especially older onliners with a high affinity for technology will presumably use the Internet to cope with everyday life.
Seifert, A. & Schelling, H. R. (online first - 2016). Seniors Online: Attitudes Toward the Internet and Coping With Everyday Life. Journal of Applied Gerontology, 1-11. DOI 10.1177/0733464816669805.
Research into reasons for Internet non-use has been mostly based on one-off cohort studies and focused on single-country contexts. This article shows that motivations for being offline changed between 2005 and 2013 among non- and ex-users in two high-diffusion European countries. Analyses of Swedish and British data demonstrate that non-user populations have become more concentrated in vulnerable groups. While traditional digital divide reasons related to a lack of access and skills remain important, motivational reasons increased in importance over time. The ways in which these reasons gain importance for non- and ex-user groups vary, as do explanations for digital exclusion in the different countries. Effective interventions aimed at tackling digital exclusion need to take into consideration national contexts, changing non-user characteristics, and individual experience with the Internet. What worked a decade ago in a particular country might not work currently in a different or even the same country.
The LISS online panel has made extra efforts to recruit and retain households that were not regular users of the Internet into the study. Households were provided with computers and/or Internet when necessary. Including these cases made the panel more representative of the Dutch population, by bringing in respondents who were more likely to be older, to live in single-person homes, and to have migration backgrounds. This article replicates five published articles that used LISS data and explores how the conclusions in these articles would have been different had the LISS panel not included the non-Internet households. There are strong demographic differences between the Internet and non-Internet households, and estimates of means would in many cases be biased if these households had not been included. However, across the five replicated studies, few of the published model estimates are substantively affected by the inclusion of these households in the LISS sample.
This paper analyses the regional dimension of the German digital divide. It studies the determinants of home Internet use in Germany on the level of counties as well as on the level of individuals. Based on two large data sets, the analyses show that population density itself cannot explain regional differences in Internet use rates. The results rather indicate that it is the different composition of individual characteristics between rural and urban populations that accounts for the regional digital divide. At individual level, the findings underline the importance of network effects.
Destatis (2020). Durchschnittliche Nutzung des Internets durch Personen nach Altersgruppen. https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Einkommen-Konsum-
Herzing, J. M. E., & Blom, A. G. (2018). The Influence of a Person's Digital Affinity on Unit Nonresponse and Attrition in an Online Panel. Social Science Computer Review, 089443931877475.
Internet World Stats. Internet Usage in the European Union.
Jessop, C. (2017). Developing the NatCen Panel.
NORC (2019). Technical Overview of the AmeriSpeak Panel. NORC's Probability-Based Household Panel.
Forschungsdaten für die Kinder- und Jugendhilfe (S. 165-186). Springer VS, Wiesbaden.
Pew Research Center (2019). Growing and Improving Pew Research Center's American Trends Panel. https://www.pewresearch.org/methods/2019/02/27/growing-and-improving-pew-research-centers-american-trends-panel/. Accessed 9 June 2020.
Statista (2020). Global digital population as of April 2020.
Toepoel, V., & Hendriks, Y. (2016). The Impact of Non-Coverage in Web Surveys in a Country with High Internet Penetration: Is It (Still) Useful to Provide Equipment to Non-Internet Households in the Netherlands? 11(1), 33-50.