Chapter

Motives for joining nonprobability online panels and their association with survey participation behavior


Abstract

Because of the declining penetration rate of landline telephones and a general decline in the willingness to participate in surveys conducted through traditional data collection modes, survey data collection using online methods, especially online panels, is becoming more popular worldwide. Although these surveys have been widely adopted among marketing researchers, critics still fear that this sampling method leads to biased results produced by a breed of “expert” volunteer survey respondents who are solely interested in monetary incentives and therefore cannot be compared with the general population. This chapter first gives an overview of the existing literature on motives for participating in surveys in general and online panels in particular and on how these motives influence survey participation behavior in the panel. Then, a new study among 1,612 members of an Austrian nonprobability online panel is presented. The results show that although other reasons for becoming an online panel member, such as helping to develop better products and services or entertainment, are reported more often, respondents who said that they joined the panel because of the promised monetary incentives have a higher starting rate and a lower break-off rate than those who cited intrinsic reasons. Additionally, other characteristics of online panel members, such as the time of panel entry (panel tenure) and how they were recruited to the panel, seem to be at least as important as monetary motives in determining survey participation behavior in online panels.


... Researchers have previously studied the demographic makeup, motivations, and behaviors of participants in on-demand research platforms. Keusch et al. [9] found that participants' top motivations for joining online panels include earning extra money, curiosity about online research and wanting to contribute to better products and services. ...
Conference Paper
Full-text available
Remote, unmoderated research platforms have increased the efficiency of traditional design research approaches such as usability testing, while also allowing practitioners to collect more diverse user perspectives than afforded by lab-based methods. The self-service nature of these platforms has also increased the number of studies created by requestors without formal research training. Past research has explored the quality and validity of research findings on these platforms, but little is known about the everyday issues participants face while completing these studies. We conducted an interview-based study with 22 experienced research participants to understand what issues are most commonly encountered and how participants mitigate issues as they arise. We found that a majority of the issues surface across research platforms, requestor protocols and prototypes, and participant responses range from filing support tickets to simply quitting studies. We discuss the consequences of these issues and provide recommendations for researchers and platforms.
... Similarly, respondents received incentives, which could lead to higher survey participation (e.g., Bosnjak et al., 2005; Keusch et al., 2014) and fewer breakoffs. Respondents were also accustomed to hard reminders (not allowing them to proceed without answering); thus, soft reminders were an exception in this study. ...
Article
Full-text available
The grid question refers to a table layout for a series of survey question items (i.e., sub-questions) with the same introduction and identical response categories. Because of their complexity, concerns have already been raised about grids in web surveys on PCs, and these concerns have heightened regarding mobile devices. Some studies suggest decomposing grids into item-by-item layouts, while others argue that this is unnecessary. To address this challenge, this paper provides a comprehensive evaluation of the grid layout and four item-by-item alternatives, using 10 response quality indicators and 20 survey estimates. Results from the experimental web survey (n = 4644) suggest that item-by-item layouts (unfolding or scrolling) should be used instead of grids, not only on mobile devices but also on PCs. While the former justifies the already increasing use of item-by-item layouts on mobile devices in survey practice, the latter implies that the prevailing routine of using grids on PCs should be reconsidered.
... Participants may differ from non-participants. Among the reasons for non-participation are time constraints and low rates of willingness to participate in studies [14,15]. These factors have also contributed to a decrease in the retention of participants, mainly in longitudinal studies [16]. ...
Article
Full-text available
Non-participation can be a source of selection bias. We evaluated the effect of non-participation on food insecurity prevalence among 2942 young adults from the EPITeen cohort (Portugal), which we have followed since assembling the cohort in 2003–2004. We conducted a cross-sectional study when the cohort participants were 26 years old. To examine the effect of non-participation, we statistically imputed the missing data on food security status using multivariate imputation by chained equations based on characteristics associated with food insecurity, specifically household income perception, education and household structure from the follow-ups at 21 or 24 years of age. In our cohort, non-participation caused a difference of ~2% in the food insecurity prevalence: 11.0% (95% CI 9.0–13.0) for the 954 participants and 12.6% (95% CI 11.1–14.1) after imputation. These estimates are close to evidence from other European countries and sustain the relevance of developing public health interventions to promote food security, especially considering the negative nutritional and health outcomes associated with food insecurity.
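The imputation step described above can be made concrete with a short sketch. The following is a minimal, hypothetical example of a chained-equations (single-pass) imputation using scikit-learn's IterativeImputer; the file name, column names, and binary coding of food insecurity are assumptions for illustration, not the authors' actual variables or software.

```python
# Illustrative sketch only: a chained-equations imputation in the spirit of the
# study above, not the authors' code. Column names and the binary coding of
# food insecurity (1 = food insecure, 0 = secure) are hypothetical.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("cohort_wave26.csv")  # hypothetical cohort file
cols = ["income_perception", "education_years", "household_size", "food_insecure"]

# Each variable with missing values is modeled from the others in turn.
imputer = IterativeImputer(max_iter=10, random_state=0)
imputed = pd.DataFrame(imputer.fit_transform(df[cols]), columns=cols)

observed_prev = df["food_insecure"].mean()               # complete cases only
adjusted_prev = (imputed["food_insecure"] > 0.5).mean()  # after imputation
print(f"observed: {observed_prev:.1%}, after imputation: {adjusted_prev:.1%}")
```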
... For example, it has been proposed that an environmental survey may receive more responses from environmentalists than others [4]. One study revealed there are multiple reasons to join online panels; for example, two-thirds of an online panel's participants mentioned that they joined a panel to have fun [32]. It might be that there are numerous motives to join online panels or answer mail surveys, and this might at least partially explain the differences in answering certain questions. ...
Chapter
This paper compares data and results from two different survey modes: a probability sampled postal survey and a nonprobability sampled online panel. Our main research objective was to explore if there are differences between the sample methods in terms of nonresponse, item response bias, and selectivity. Both the postal survey and online panel data consist of Finns aged 18–74. Altogether, 2470 respondents were included in the probability sample gathered randomly from the population register of Finland (sample size was 8000 with a response rate of 30.9%), and 1254 respondents were from an online panel organized by a market research company. We collected the data in late 2017. The findings confirmed that an online panel can improve representativeness by including more respondents from groups that are underrepresented in the traditional probability sample. However, we found that panel respondents were more likely to leave unanswered questions perceived as sensitive, which may be a sign of a measurement bias related to intrusiveness. Moreover, the results indicated selection differences between samples related to respondents’ media interests.
... This argument refers to selection effects in the presence of conditional incentives. More broadly, various scholars have expressed concerns that providing incentives could lead to a lack of genuine interest in the survey, resulting in lower response quality, because such rewards represent external motivators to participate (Cole et al., 2015; Keusch et al., 2014; Tourangeau, 2007). Psychologists have studied whether the presence of extrinsic motivation, defined as doing an activity in order to attain some outcome, leads to a decrease in intrinsic motivation, defined as doing an activity for the inherent satisfaction of the activity itself (Ryan & Deci, 2000). ...
Article
Full-text available
As ever more surveys are conducted, recruited respondents are more likely to already have previous survey experience. Furthermore, it has become more difficult to convince individuals to participate in surveys, and thus, incentives are increasingly used. Both previous survey experience and participation in surveys due to incentives have been discussed in terms of their links with response quality. This study aims to assess whether previous web survey experience and survey participation due to incentives are linked with three indicators of response quality: item non‐response, primacy effect and non‐differentiation. Analysing data from the probability‐based CROss‐National Online Survey panel covering Estonia, Slovenia and Great Britain, we found that previous web survey experience is not associated with item non‐response and the occurrence of a primacy effect but is associated with non‐differentiation. Participating due to the incentive is not associated with any of the three response quality indicators assessed. Hence, overall, we find little evidence that response quality is linked with either previous web survey experience or participating due to the incentive.
... Research on online access panels indicates that the promised monetary incentive is the most important motivator for people to join the panel and the strongest predictor of subsequent survey participation (Keusch et al., 2014; Sparrow, 2006). This is supported by Sparrow (2006), who showed that 52% of respondents to the ICM online panel primarily joined because “they felt it would be an enjoyable way to earn money or enter prize draws.” ...
Article
Full-text available
Nonprobability online panels are commonly used in the social sciences as a fast and inexpensive way of collecting data in contrast to more expensive probability-based panels. Given their ubiquitous use in social science research, a great deal of research is being undertaken to assess the properties of nonprobability panels relative to probability ones. Much of this research focuses on selection bias; however, there is considerably less research assessing the comparability (or equivalence) of measurements collected from respondents in nonprobability and probability panels. This article contributes to addressing this research gap by testing whether measurement equivalence holds between multiple probability and nonprobability online panels in Australia and Germany. Using equivalence testing in the Confirmatory Factor Analysis framework, we assessed measurement equivalence in six multi-item scales (three in each country). We found significant measurement differences between probability and nonprobability panels and within them, even after weighting by demographic variables. These results suggest that combining or comparing multi-item scale data from different sources should be done with caution. We conclude with a discussion of the possible causes of these findings, their implications for survey research, and some guidance for data users.
... Comparative studies have investigated the decreasing response rates to different survey modes in many countries (Atrostic et al., 2001; Brick & Williams, 2013; Curtin et al., 2005; Kreuter, 2013; Rogers et al., 2004; Williams & Brick, 2018) to determine the factors that impact response rates, such as the country-specific survey climate and response propensity (e.g., Barbier et al., 2015; Beullens et al., 2018). One indicator of the acceptance of surveys in a country might be the willingness of its citizens to participate in surveys of any mode (Brüggen et al., 2011; Keusch et al., 2014; Petrova et al., 2007). Unwillingness to participate in a national survey might be fueled by data protection concerns (Gummer & Daikeler, 2018). ...
Article
Full-text available
A major challenge in web-based cross-cultural data collection is varying response rates, which can result in low data quality and non-response bias. Country-specific factors such as the political and demographic, economic, and technological factors as well as the socio-cultural environment may have an effect on the response rates to web surveys. This study evaluates web survey response rates using meta-analytical methods based on 110 experimental studies from seven countries. Three dependent variables, so-called effect sizes, are used: the web response rate, the response rate to the comparison survey mode, and the difference between the two response rates. The meta-analysis indicates that four country-specific factors (political and demographic, economic, technological, and socio-cultural) impact the magnitude of web survey response rates. Specifically, web surveys achieve high response rates in countries with high population growth, high internet coverage, and a high survey participation propensity. On the other hand, web surveys are at a disadvantage in countries with a high population age and high cell phone coverage. This study concludes that web surveys can be a reliable alternative to other survey modes due to their consistent response rates and are expected to be used more frequently in national and international settings.
... On the other hand, members of nonprobability online panels in general, and our sample in particular, could be fairly selective. Many members of such panels are enrolled with multiple vendors, thus participating in a large number of surveys (Keusch et al., 2014). Given frequent survey participation, the stimulus that one survey on a specific topic provides might not be strong enough to change someone's behaviour outside the survey. ...
Article
Surveys continue to be a popular way of collecting data in the social sciences. But despite their popularity, they have a number of limitations, including the possibility of changing the behaviour of respondents. Such mere-measurement or question-behaviour effects can compromise the external validity of social data. In this article, we use digital trace data collected from PCs and mobile devices to investigate the effects of surveys on news and politics consumption. Using a non-probability panel of respondents in Germany, we combine the digital trace data with that from three online surveys regarding the federal election. In contrast to our expectation, participation in the survey does not influence online news and politics media consumption. Furthermore, we find weak evidence that respondents with previous high media consumption are less likely to be influenced by doing the survey compared to those with low media consumption.
... We also expect general attitudes toward surveys and research to influence the willingness to participate in passive mobile data collection, as positive attitudes toward surveys in general lead to higher web survey participation (Bosnjak, Tuten, and Wittmann 2005; Brüggen et al. 2011; Haunberger 2011; Keusch, Batinic, and Mayerhofer 2014). ...
Article
Full-text available
The rising penetration of smartphones now gives researchers the chance to collect data from smartphone users through passive mobile data collection via apps. Examples of passively collected data include geolocation, physical movements, online behavior and browser history, and app usage. However, to passively collect data from smartphones, participants need to agree to download a research app to their smartphone. This leads to concerns about nonconsent and nonparticipation. In the current study, we assess the circumstances under which smartphone users are willing to participate in passive mobile data collection. We surveyed 1,947 members of a German nonprobability online panel who own a smartphone using vignettes that described hypothetical studies where data are automatically collected by a research app on a participant’s smartphone. The vignettes varied the levels of several dimensions of the hypothetical study, and respondents were asked to rate their willingness to participate in such a study. Willingness to participate in passive mobile data collection is strongly influenced by the incentive promised for study participation but also by other study characteristics (sponsor, duration of data collection period, option to switch off the app) as well as respondent characteristics (privacy and security concerns, smartphone experience).
... Other work has considered the motivations of panel participants. Keusch et al. found that the top-reported motivations for participating in panels included helping develop better products, enjoying the act of taking surveys, earning extra money, and being curious about online research panels [10]. Teodoro et al. found that participants who use online panels (e.g. ...
Conference Paper
Full-text available
Researchers who use remote unmoderated usability testing services often rely on panels of on-demand participants. In this exploratory study, we wanted to better understand the effects of participants completing many usability studies over time, particularly how this experience may shape the content and quality of user feedback. We present a user study with 15 diverse "professional participants" and discuss some of the panel conditioning effects described by participants.
... Third, our respondents are highly experienced with Web surveys; almost 60% of them reported having participated in 10 or more Web surveys over the last 30 days. This makes our study population highly comparable to nonprobability online panels, where members are used to receiving many survey invitations and multiple panel membership is quite common (Keusch, Batinic, & Mayerhofer, 2014). In general, experimental research on the effects of mobile device use is very much limited to nonprobability online panels, with the exception of Antoun (2015) and de Bruijne and Wijnant (2013). ...
Article
Full-text available
Due to a rising mobile device penetration, Web surveys are increasingly accessed and completed on smartphones or tablets instead of desktop computers or laptops. Mobile Web surveys are also gaining popularity as an alternative self-administered data collection mode among survey researchers. We conducted a methodological experiment among iPhone owners and compared the participation and response behavior of three groups of respondents: iPhone owners who started and completed our survey on a desktop or laptop PC, iPhone owners who self-selected to complete the survey on an iPhone, and iPhone owners who started on a PC but were requested to switch to iPhone. We found that respondents who completed the survey on a PC were more likely to be male, to have a lower educational level, and to have more experience with Web surveys than mobile Web respondents, regardless of whether they used the iPhone voluntarily or were asked to switch from a PC to an iPhone. Overall, iPhone respondents had more missing data and took longer to complete the survey than respondents who answered the questions on a PC, but they also showed less straightlining behavior. There are only minimal device differences on survey answers obtained from PCs and iPhones.
... Recent research on participation motives and behavior in online panels shows that positive attitudes toward surveys in general lead to higher Web survey participation (Bosnjak et al. 2005; Brüggen et al. 2011; Haunberger 2011; Keusch et al. 2014). Additionally, several studies find that individuals show consistency in their (non)participation behavior across Web surveys over several waves (Göritz 2008; Haunberger 2011; Petrova et al. 2007; Peytchev 2011; Svensson et al. 2012) and in multiple surveys in an online panel (Göritz 2014; Keusch 2013). ...
Article
In recent years Web surveys have emerged as the most popular mode of primary data collection in market and social research. To improve our understanding of the influence of different societal-level factors, characteristics of the sample person, and attributes of the survey design on participation in Web surveys, this paper establishes a systematic link between theoretical frameworks used to explain survey participation behavior and state-of-the-art empirical research on online data collection methods. The concepts of self-perception, cognitive dissonance, commitment and involvement, social exchange, compliance, leverage-salience, and planned behavior are discussed, and their relationships with factors that have been empirically shown to influence Web survey participation are analyzed using data from an expert survey. This paper will help researchers and practitioners to make informed decisions about the use of techniques for increasing participation in Web surveys.
Article
In this case study, we examine a novel aspect of data collected in a typical probability and a typical nonprobability panel: mobile app data. The data were collected in Great Britain in 2018, using the Innovation Panel of the UK Household Longitudinal Study and the Lightspeed online access panel. Respondents in each panel were invited to participate in a month-long study, reporting all their daily expenditures in the app. In line with most of the research on nonprobability and probability-based panel data, our results indicate differences in the data gathered from these data sources. For example, more female, middle-aged, and highly educated people with higher digital skills and a greater interest in their finances participated in the nonprobability app study. Our findings also show that resulting differences in the app spending data are difficult to eliminate by weighting. The only data quality aspect for which we do not find evidence of differences between the nonprobability and probability-based panel is behavior in using the spending app. This finding is contrary to the argument that nonprobability online panel participants try to maximize their monetary incentive at the expense of data quality. However, this finding is in line with some of the scarce existing literature on response behavior in surveys, which is inconclusive regarding the question of whether nonprobability online panel participants answer questions less conscientiously than probability-based panel respondents. Since the two panels in our case study differ in more aspects than the sample selection procedure, more research in different contexts is necessary to establish generalizability and causality.
Article
Purpose Digital trace data provide new opportunities to study how individuals act and interact with others online. One advantage of this type of data is that it measures behavior in a less obtrusive way than surveys, potentially reducing measurement error. However, it is well documented that in observational studies, participants' awareness of being observed can change their behavior, especially when the behavior is considered sensitive. Very little is known regarding this effect in the online realm. Against this background, we studied whether people change their online behavior because digital trace data are being collected. Design/methodology/approach We analyzed data from a sample of 1,959 members of a German online panel who had consented to the collection of digital trace data about their online browsing and/or mobile app usage. To identify reactivity, we studied change over time in five types of sensitive online behavior. Findings We found that the frequency and duration with which individuals engage in sensitive behaviors online gradually increase during the first couple of days after the installation of a tracker, that this pattern of increase is mainly shown by individuals who extensively engage in sensitive behavior, and that the change in behavior is limited to certain types of sensitive online behavior. Originality/value There is an increased interest in the use of digital trace data in the social sciences, and our study is one of the first methodological contributions measuring reactivity in digital trace data measurement.
Article
Full-text available
Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention. To fill this gap, we investigate response quality in a comprehensive study of seven nonprobability online panels and three probability-based online panels with identical fieldwork periods and questionnaires in Germany. Three response quality indicators typically associated with survey satisficing are assessed: straight-lining in grid questions, item nonresponse, and midpoint selection in visual design experiments. Our results show that there is significantly more straight-lining in the nonprobability online panels than in the probability-based online panels. However, contrary to our expectations, there is no generalizable difference between nonprobability online panels and probability-based online panels with respect to item nonresponse. Finally, neither respondents in nonprobability online panels nor respondents in probability-based online panels are significantly affected by the visual design of the midpoint of the answer scale.
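Two of the response quality indicators named above (straight-lining in grids and item nonresponse) are simple to operationalize. The sketch below is a hypothetical illustration in pandas; the item names, file name, and thresholds are assumptions and do not come from the cited study.

```python
# Illustrative computation of straight-lining and item nonresponse on a grid
# of items; column names and data file are hypothetical.
import pandas as pd

grid_items = ["q1_a", "q1_b", "q1_c", "q1_d", "q1_e"]  # hypothetical grid columns
df = pd.read_csv("panel_responses.csv")

# Straight-lining: a respondent gives the identical answer to every grid item.
answered = df[grid_items].dropna()
straightlining_rate = (answered.nunique(axis=1) == 1).mean()

# Item nonresponse: share of grid items left unanswered, per respondent.
item_nonresponse = df[grid_items].isna().mean(axis=1)

print(f"straight-lining: {straightlining_rate:.1%}")
print(f"mean item nonresponse: {item_nonresponse.mean():.1%}")
```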
Article
Purpose The purpose of this paper is to provide a comprehensive review of the respondents’ fraud phenomenon in online panel surveys, delineate data quality issues from surveys of broad and narrow populations, alert fellow researchers about higher incidence of respondents’ fraud in online panel surveys of narrow populations, such as logistics professionals and recommend ways to protect the quality of data received from such surveys. Design/methodology/approach This general review paper has two parts, namely, descriptive and instructional. The current state of online survey and panel data use in supply chain research is examined first through a survey method literature review. Then, a more focused understanding of the phenomenon of fraud in surveys is provided through an analysis of online panel industry literature and psychological academic literature. Common survey design and data cleaning recommendations are critically assessed for their applicability to narrow populations. A survey of warehouse professionals is used to illustrate fraud detection techniques and glean additional, supply chain specific data protection recommendations. Findings Surveys of narrow populations, such as those typically targeted by supply chain researchers, are much more prone to respondents’ fraud. To protect and clean survey data, supply chain researchers need to use many measures that are different from those commonly recommended in methodological survey literature. Research limitations/implications For the first time, the need to distinguish between narrow and broad population surveys has been stated when it comes to data quality issues. The confusion and previously reported “mixed results” from literature reviews on the subject have been explained and a clear direction for future research is suggested: the two categories should be considered separately. Practical implications Specific fraud protection advice is provided to supply chain researchers on the strategic choices and specific aspects for all phases of surveying narrow populations, namely, survey preparation, administration and data cleaning. Originality/value This paper can greatly benefit researchers in several ways. It provides a comprehensive review and analysis of respondents’ fraud in online surveys, an issue poorly understood and rarely addressed in academic research. Drawing from literature from several fields, this paper, for the first time in literature, offers a systematic set of recommendations for narrow population surveys by clearly contrasting them with general population surveys.
Article
This research investigates the effect of topic sensitivity on panelists’ motivations and data quality. An Internet survey in which topic sensitivity varied (high, low) was conducted with panelists using the Survey Participation Inventory (SPI). A two-factor structure based on intrinsic versus extrinsic motivations was used to cluster respondents. A two-way factorial MANOVA between the sensitivity conditions and clusters assessed self-report data quality, completion time, extreme response style, and response dispersion. Panelists’ motivations decreased in the high sensitivity topic condition. However, extrinsic rewards appeared to fortify intrinsic motives without seriously compromising data quality for panelists asked to respond to sensitive questions.
Article
Full-text available
Despite the fact that many online surveys today rely on online access panels, previous studies have primarily examined what makes respondents answer online surveys, rather than focusing on the recruitment process to such panels. One common method is to recruit panel members through another online survey. Through an experimental 2 × 3 design, we examine how the framing (altruistic vs. egoistic appeals) and the placement of recruitment questions (early, middle, or late in the survey) in such surveys influence recruitment efficiency and subsequent survey behavior. We find that altruistic appeals and middle or late placements increase the recruitment rates. Further, altruistic appeals promote higher degrees of future survey participation in the panel. Early recruitment questions also risk leading to more survey break-offs.
Article
Full-text available
In recent years, a number of studies have used the material values scale (MVS) developed by Richins and Dawson (1992) to examine materialism as a facet of consumer behavior. This research examines the MVS in light of the accumulated evidence concerning this measure. A review of published studies reporting information about the scale and analysis of 15 raw data sets that contain the MVS and other measures revealed that the MVS performs well in terms of reliability and empirical usefulness, but the dimensional structure proposed by Richins and Dawson is not always evident in the data. This article proposes a 15-item measure of the MVS that has better dimension properties than the original version. It also reports the development of a short version of the MVS. Scale lengths of nine, six, and three items were investigated. Results indicate that the nine-item version possesses acceptable psychometric properties when used to measure materialism at a general level. This article also describes a psychometric approach for developing shorter versions of extant multi-item measures.
Article
Full-text available
This study examines nonresponse and coverage errors separately in a probability Web panel survey by applying traditional postsurvey adjustments. This was done by using variables whose estimates were obtainable at both the survey respondent and the full survey sample levels and whose values were known for both the full survey sample and the target population. Nonresponse error measured by the differences between the estimates from the respondents and the known full sample values was not found to be large, implying that nonresponse error in this Web survey data may not be critical. However, coverage properties of the full survey sample show some problems, and traditional postsurvey adjustments were limited in alleviating the unequal coverage of the survey sample. This coverage problem was more evident for the subpopulation-level estimates.
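To make the idea of a "traditional postsurvey adjustment" concrete, the sketch below computes a basic post-stratification weight (population share of a cell divided by its sample share). It is only an illustration under assumed age-by-gender cells and hypothetical benchmark shares, not the adjustment actually used in the study above.

```python
# Minimal post-stratification sketch; cells, benchmark shares, and column
# names are hypothetical assumptions for illustration.
import pandas as pd

respondents = pd.read_csv("respondents.csv")  # one row per respondent
benchmark = {                                 # assumed population shares per cell
    ("18-39", "f"): 0.18, ("18-39", "m"): 0.19,
    ("40-64", "f"): 0.22, ("40-64", "m"): 0.21,
    ("65+", "f"): 0.11, ("65+", "m"): 0.09,
}

# Weight = population share of the cell / sample share of the cell.
sample_share = respondents.groupby(["age_group", "gender"]).size() / len(respondents)
weights = respondents.apply(
    lambda r: benchmark[(r["age_group"], r["gender"])]
    / sample_share[(r["age_group"], r["gender"])],
    axis=1,
)

# Weighted estimate of a survey variable (here a 0/1 indicator named "y").
weighted_mean = (respondents["y"] * weights).sum() / weights.sum()
print(round(weighted_mean, 3))
```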
Article
Full-text available
In the 45 years since Cattell used English trait terms to begin the formulation of his “description of personality,” a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.
Article
Full-text available
Due to methodological problems, the quality of the outcomes of web surveys may be seriously affected. This paper addresses one of these problems: self-selection of respondents. Self-selection leads to a lack of representativity and thus to biased estimates. It is shown that the bias of estimators in self-selection surveys can be much larger than in surveys based on traditional probability samples. It is explored whether some correction techniques (adjustment weighting and use of reference surveys) can improve the quality of the outcomes. It turns out that there is no guarantee of success.
Article
Full-text available
In this paper, we investigate whether there are differences in the effect of instrument design between trained and fresh respondents. In three experiments, we varied the number of items on a screen, the choice of response categories, and the layout of a five-point rating scale. In general, effects of design carry over between trained and fresh respondents. We found little evidence that survey experience influences the question-answering process. Trained respondents seem to be more sensitive to satisficing. The shorter completion time, higher interitem correlations for multiple-item-per-screen formats, and the fact that they select the first response options more often indicate that trained respondents tend to take shortcuts in the response process and study the questions less carefully.
Article
Full-text available
Examined how mail survey response is affected by 2 independent variables—an offer of survey results to respondents and questionnaire interest. Subjects were 420 undergraduate business students. Although the offer of survey results increased the number of requests for results, it had no effect on the response rate, response speed, or the item omission rate, and it increased the cost per usable questionnaire. Of the 2 types of questionnaires, the interesting one produced a greater response rate than the uninteresting one, causing a lower cost per usable questionnaire.
Article
Full-text available
Marie Jahoda's latent deprivation model was tested with a representative sample of the German population (N = 998). As expected, employees reported high levels of time structure, social contact, collective purpose, and activity not only in comparison to unemployed persons but also in comparison to persons who are out of the labor force (i.e., students, homemakers, retirees). Even unskilled manual workers reported more access to these “latent functions” than persons without employment. For the fifth of Jahoda's dimensions, identity/status, no significant differences between employed persons and persons who are out of the labor force could be identified. However, unemployed persons reported less status than all other groups did. Thus, Jahoda's model was clearly endorsed for four of the five latent functions of employment and partly endorsed for the fifth function. All variables in the model correlated significantly with distress, as expected. Demographic correlates of the manifest and latent functions were also analyzed: Access to the latent functions was best among young men from higher social classes who lived in an intimate relationship in a comparatively large household with children.
Article
Full-text available
Panel conditioning arises if respondents are influenced by participation in previous surveys, such that their answers differ from the answers of individuals who are interviewed for the first time. Having two panels - a trained one and a completely fresh one - created a unique opportunity for analyzing panel conditioning effects. To determine which type of question is sensitive to panel conditioning, 981 trained respondents and 2809 fresh respondents answered nine questions with different question types. The results in this paper show that panel conditioning mainly arises in knowledge questions. Answers to questions on attitudes, actual behavior, or facts were hardly sensitive to panel conditioning. The effect of panel conditioning in knowledge questions was bigger for questions where fewer respondents knew the answer and mainly associated with the number of times a respondent answered the exact same question before.
Article
Full-text available
Cross-mode surveys are on the rise. The current study compares levels of response styles across three modes of data collection: paper-and-pencil questionnaires, telephone interviews, and online questionnaires. The authors make the comparison in terms of acquiescence, disacquiescence, and extreme and midpoint response styles. To do this, they propose a new method, namely, the representative indicators response style means and covariance structure (RIRSMACS) method. This method contributes to the literature in important ways. First, it offers a simultaneous operationalization of multiple response styles. The model accounts for dependencies among response style indicators due to their reliance on common item sets. Second, it accounts for random error in the response style measures. As a consequence, random error in response style measures is not passed on to corrected measures. The method can detect and correct cross-mode response style differences in cases where measurement invariance testing and multitrait multimethod designs are inadequate. The authors demonstrate and discuss the practical and theoretical advantages of the RIRSMACS approach over traditional methods.
Article
Full-text available
Certain survey characteristics proven to affect response rates, such as a survey's length and topic, are often under limited control of the researcher. Therefore, survey researchers sometimes seek to compensate for such undesired effects on response rates by employing countermeasures such as material or nonmaterial incentives. The scarce evidence on those factors' effects in web survey contexts is far from being conclusive. This study is aimed at filling this gap by examining the effects of four factors along with selected interactions presumed to affect response rates in web surveys. Requests to complete a web-based, self-administered survey were sent to 2,152 owners of personal websites. The 2 × 2 × 2 × 2 fully crossed factorial design encompassed the experimental conditions of (a) high versus low topic salience, (b) short versus long survey, (c) lottery incentive versus no incentive, and (d) no feedback and general feedback (study results) versus personal feedback (individual profile of results). As expected, highly salient and shorter surveys yielded considerably higher unit-response rates. Moreover, partial support was found for interaction hypotheses derived from the leverage-salience theory of survey participation. Offering personalized feedback compensated for the negative effects of low topic salience on response rates. Also, the lottery incentive tended to evoke more responses only if the survey was short (versus long), but this interaction effect was only marginally significant. The results stress the usefulness of a multifactorial approach encompassing interaction effects to understand participation differences in web surveys.
Article
Full-text available
Recent years have seen an impressive increase in web-based research, of which we review and discuss two main types. First, researchers can create online versions of traditional questionnaires. Using the internet in this way usually does not compromise the psychometric properties of such measures, and participants are typically not less representative of the general population than those of traditional studies. Technical guidelines are provided to set up such studies, and thorny issues such as participants' anonymity are discussed. We will also discuss issues regarding the assessment of minors and the repeated assessment of participants to assess developmental changes via the web. Second, the internet has changed the way people interact with each other. The study of the psychosocial consequences of this development is called cyberpsychology. We review emerging findings from this young discipline, with a focus on developmentally-relevant implications such as the use of the internet by adolescents to disc...
Article
Full-text available
Purpose To provide a thorough analysis of the role of the internet in survey research and to discuss the implications of online surveys becoming such a major force in research. Design/methodology/approach The paper is divided into four major sections: an analysis of the strengths and potential weaknesses of online surveys; a comparison of online surveys with other survey formats; a discussion on the best uses for online surveys and how their potential weaknesses may be moderated; and an overview of the online survey services being offered by the world's largest research firms. Findings If conducted properly, online surveys have significant advantages over other formats. However, it is imperative that the potential weaknesses of online surveys be mitigated and that online surveys only be used when appropriate. Outsourcing of online survey functions is growing in popularity. Practical implications The paper provides a very useful source of information and impartial advice for any professional who is considering the use of online surveys. Originality/value The paper synthesizes the vast literature related to online surveys, presents original material related to survey methodology, and offers a number of recommendations.
Article
A comprehensive literature review of techniques used to increase response rates to questionnaires was conducted. Conclusions were based on arithmetic combination of 497 response rates found in 93 journal articles. Response rates were found to be increased by personal and telephone (versus mail) surveys, either prepaid or promised incentives, nonmonetary premiums and rewards, and increasing amounts of monetary reward. Other facilitators that increased responding were preliminary notification, foot-in-the-door techniques, personalization, and followup letters. Several facilitators were found to be ineffective. An estimated magnitude of effect was calculated for each design variation.
Article
This article studies population profiling to create a comprehensive attitudinal and personality profile of actual nonrespondents to a common organizational survey used in higher education institutions. Population profiling represents a nearly ideal way of studying nonresponse. Practically, it could only be implemented in limited and unique circumstances; in this case, a controlled field experiment involving some deception. The approach involves creating an archival database on an organizational stakeholder group that contains attitude and personality information along with personal identifiers. The database further contains information on these individuals' intentions to participate in upcoming survey work. Because the archival database contains identifiers, future surveys can be administered with code numbers linking back to the identifiers. Therefore, the organizational researcher can determine who does not return the survey by tabulating the code numbers. Respondents and nonrespondents to these subsequent surveys can then be compared on the comprehensive information contained in the archival database. Importantly, because the archival database contains information pertaining to individuals' intentions to participate in the survey work that was actually conducted, classes of nonrespondents can be studied.
Article
This paper describes a German study which compared eight ways of recruiting members for an online access panel. Two thousand respondents, divided into four groups of 500, were invited to sign up with the panel via email, fax, flier or letter. Half of each sample's invitations offered a cash lottery, into which new panellists would be entered, whereas the other half of the invitations did not offer a lottery. Overall, email was the most successful means of solicitation, followed by flier and fax, which were equally efficient. Very few panellists were recruited via letter. The lottery was effective only with fliers. The composition of the recruited samples differed according to solicitation method. Fax-recruited individuals were older than those recruited by flier and email. Panellists recruited via email had been using the internet longer than flier- and fax-recruited panellists and they used the internet more often than those recruited via fax. After their recruitment, panellists were followed up in the first two studies run in the panel. The probability of their taking part in these studies and of completing these questionnaires was independent of the method by which they had been recruited.
Article
Opinion research is frequently carried out through the Internet and a further increase can be expected. The article focuses on the online access panel, in which respondents are previously recruited through non-probability methods. Despite substantial time- and cost-reduction, online access panel research mainly has to cope with limited Internet coverage and self-selection in the recruitment phase of new panel members. The article investigates whether frequently applied weighting procedures, based on poststratification variables and propensity scores, make online access panel data more representative of the general population. To address this issue, the answers to identical questions are compared between an online self-administered survey of previously recruited online access panel respondents and a face-to-face survey of randomly sampled respondents of the general population. Both respondent groups were surveyed at a similar moment in time (2006-2007) in the same geographical region (Flanders, Belgium). The findings reveal many significant differences, regarding sociodemographic characteristics as well as attitudes towards work, politics and immigrants. The results can be explained by both the specific characteristics of the respondent groups and mode effects. Weighting adjustment had only a minor impact on the results and did not eliminate the differences.
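Propensity-score weighting of the kind mentioned above can be sketched briefly. The following hypothetical example stacks the online access panel and the face-to-face reference sample, models the probability of being an online respondent, and weights online respondents by (1 - p)/p; the file, variable names, and model choice are assumptions, not the procedure used in the cited study.

```python
# Illustrative propensity-score weighting sketch; all names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

combined = pd.read_csv("combined_samples.csv")  # online panel + reference sample stacked
covariates = ["age", "gender_female", "education_years", "employed"]
combined["online"] = (combined["source"] == "online_panel").astype(int)

# Model the probability of being in the online panel given the covariates.
model = LogisticRegression(max_iter=1000).fit(combined[covariates], combined["online"])
p_online = model.predict_proba(combined[covariates])[:, 1]

# Weight online respondents toward the reference sample: w = (1 - p) / p.
is_online = combined["online"].to_numpy() == 1
combined.loc[is_online, "weight"] = (1 - p_online[is_online]) / p_online[is_online]

online_rows = combined[is_online]
weighted_attitude = (online_rows["attitude"] * online_rows["weight"]).sum() / online_rows["weight"].sum()
print(round(weighted_attitude, 3))
```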
Article
The Anonymous Elect is the book that restores market research to its original condition and bestows on it its full interdisciplinary rights. It asks questions that address market researchers and sociologists as well as psychologists, linguists and specialists in marketing and communication: Is there a language of online panel communication? What does this language say about the relationship between the online researcher and the online respondent? To what extent has the online medium increased the self-awareness of today's respondents to research studies? A memorable experiment in writing, Andrei Postoaca's exploration of online access panels is a book about interviewing and being interviewed, addressing and being addressed. By shifting the two voices involved in the online panel communication, the author approaches market research not only by way of facts, figures and plain statistical evidence but also by way of interpretation of the rhetoric of the online surveying act.
Article
The majority of online research is now conducted via discontinuous online access panels, which promise high response rates, sampling control, access to populations that are hard to reach, and detailed information about respondents. To sustain a critical mass of respondents, overcome panel attrition and recruit new panel members, marketers must understand how they can predict and explain what motivates people to participate repeatedly in online surveys. Using the newly developed survey participation inventory (SPI) measure, we identify three clusters of participants, characterised as voicing assistants, reward seekers and intrinsics. Our results suggest that most online surveys are filled out by intrinsically motivated respondents that show higher participation rates, response effort and performance; incentives do not offer an important response motive.
Article
This study examines four theoretical frameworks for explaining survey response behavior and their role in survey research. The results of a survey of 282 research practitioners in Asia-Pacific, North America, and Western Europe show that research practitioners in general are aware and do make use of the theories of cognitive dissonance, commitment and involvement, social exchange, and self-perception. Although the literature indicates that commitment and involvement have been used very little to explain methodological effects, the present study provides evidence to the contrary. A comparison of the results obtained from the three sample groups reveals some significant differences in the research practitioners' perceptions of why people participate in surveys as well as in the survey design strategies they adopt. There also is evidence that survey design practices are associated with, and perhaps influenced by, the research practitioners' beliefs about why people participate in surveys.
Article
While a low survey response rate may indicate that the risk of nonresponse error is high, we know little about when nonresponse causes such error and when nonresponse is ignorable. Leverage-salience theory of survey participation suggests that when the survey topic is a factor in the decision to participate, noncooperation will cause nonresponse error. We test three hypotheses derived from the theory: (1) those faced with a survey request on a topic of interest to them cooperate at higher rates than do those less interested in the topic; (2) this tendency for the "interested" to cooperate more readily is diminished when monetary incentives are offered; and (3) the impact of interest on cooperation has nonignorability implications for key statistics. The data come from a three-factor experiment examining the impact on cooperation with surveys on (a) five different topics, using (b) samples from five different populations that have known attributes related to the topics, with ...
Article
Two hundred fourteen manipulations of the independent variables in 98 mailed questionnaire response rate experiments were treated as respondents to a survey, yielding a mean final response rate of 60.6% with slightly over two contacts. The number of contacts and the judged salience to the respondent were found to explain 51% of the variance in final response. Government organization sponsorship, the type of population, the length of the questionnaire, questions concerning other individuals, the use of a special class of mail or telephone on the third contact, and the use of metered or franked mail on the outer envelope affected final response independent of contacts and salience. A causal model of the final response rate, including initial response, explaining 90% of the variance, and a regression equation predicting final response rates are presented to show that high response rates are achievable by manipulating the costs of responding and the perceived importance of both the research and the individual response.
Book
Exclusively combining design and sampling issues, Handbook of Web Surveys presents a theoretical yet practical approach to creating and conducting web surveys. From the history of web surveys to various modes of data collection to tips for detecting error, this book thoroughly introduces readers to this cutting-edge technique and offers tips for creating successful web surveys. The authors provide a history of web surveys and go on to explore the advantages and disadvantages of this mode of data collection. Common challenges involving under-coverage, self-selection, and measurement errors are discussed as well as topics including: sampling designs and estimation procedures; comparing web surveys to face-to-face, telephone, and mail surveys; errors in web surveys; mixed-mode surveys; weighting techniques including post-stratification, generalized regression estimation, and raking ratio estimation; use of propensity scores to correct bias; and web panels. Real-world examples illustrate the discussed concepts, methods, and techniques, with related data freely available on the book's Website. Handbook of Web Surveys is an essential reference for researchers in the fields of government, business, economics, and the social sciences who utilize technology to gather, analyze, and draw results from data. It is also a suitable supplement for survey methods courses at the upper-undergraduate and graduate levels.
Article
This study reports the findings of a comparison between traditional and online data collection methods. Respondents were recruited in four different ways, namely from an online opt-in panel, via website pop-ups, by postal mail and by telephone. The response patterns from different data collection methods relating to a variety of subjects (e.g. internet use, technology adoption, attitudes, interests and opinions, demographics) are compared. The results indicate that all sampling methods generate different results (also between postal and telephone research) when not controlling for socio-demographics from the national population. Once controlling for such factors, online and offline data collection methods generate similar results in terms of socio-demographics, attitudes, interests and opinions. Although some differences remain, they cannot be attributed to one or the other recruitment method. Correcting post hoc via reselection reduces the differences considerably in terms of technology adoption, while clear differences remain in terms of internet usage behaviour. Post hoc reselection proved to be more effective than reweighting for technological topics.
Article
A key characteristic of Web surveys is their diversity. Unlike other modes of data collection, where the method tells us something about both the sampling process and the method of data collection, the term “Web survey” is too broad to give us much useful information about how the study was carried out. For example, referring to an RDD telephone survey describes both the method of sampling (in part) and the mode of data collection. But there are so many different ways to identify sampling frames for Web surveys, to invite people to complete such surveys, and to administer surveys over the Internet (see Couper 2000) that the term “Web survey” conveys little evaluative information. The implications of this diversity are twofold. First, broad generalizations or claims about Web surveys relative to other methods of data collection are ill-advised. Second, much more detail about the process is needed in order for the reader to make judgments about the quality of the process itself or about the resulting data. The papers in this special issue reflect some of the many ways that the Internet can be used—whether alone or in combination with other methods—to conduct surveys. Despite their relatively short history, Web surveys have already had a profound effect on survey research. The first graphic browser (NCSA Mosaic) was released in 1992, with Netscape Navigator following in 1994 and Internet Explorer in 1995. The first published papers on Web surveys appeared in 1996. Since then, there has been a virtual explosion of interest in the Internet generally, and the World Wide Web specifically, as a tool for survey data collection (see www.WebSM.org for a detailed bibliography). This is not to say that the early claims that Web surveys will make all other methods of data collection obsolete have come to pass. But it is fair to say that the methodological attention that Web surveys have received has exceeded other modes in a similar time period. In part, this is because the relative cost of Web surveys makes them a more accessible method of data collection than telephone or face-to-face surveys. In addition, the computerized nature of Web surveys facilitates conducting ...
Article
This study examines four theoretical frameworks for explaining survey response behavior and their role in survey research. The results of a survey of 282 research practitioners in Asia-Pacific, North America, and Western Europe show that research practitioners are generally aware of, and do make use of, the theories of cognitive dissonance, commitment and involvement, social exchange, and self-perception. Although the literature indicates that commitment and involvement have been used very little to explain methodological effects, the present study provides evidence to the contrary. A comparison of the results obtained from the three sample groups reveals some significant differences in the research practitioners’ perceptions of why people participate in surveys, as well as in the survey design strategies they adopt. There is also evidence that survey design practices are associated with, and perhaps influenced by, the research practitioners’ beliefs about why people participate in surveys.
Article
A comprehensive literature review of techniques used to increase response rates to questionnaires was conducted. Conclusions were based on arithmetic combination of 497 response rates found in 93 journal articles. Response rates were found to be increased by personal and telephone (versus mail) surveys, either prepaid or promised incentives, nonmonetary premiums and rewards, and increasing amounts of monetary reward. Other facilitators that increased responding were preliminary notification, foot-in-the-door techniques, personalization, and followup letters. Several facilitators were found to be ineffective. An estimated magnitude of effect was calculated for each design variation.
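The "arithmetic combination" of response rates can be illustrated with a small sketch: average the response rates of studies that used a given facilitator, average those of comparable studies without it, and take the difference as the estimated magnitude of effect. The rates below are invented for illustration and are not the review's data:

```python
# Sketch of combining response rates across studies to estimate the magnitude
# of effect of one facilitator (e.g., prepaid incentives). Rates are invented.

with_incentive    = [0.52, 0.61, 0.47, 0.58]   # response rates in studies using the facilitator
without_incentive = [0.41, 0.49, 0.38, 0.45]   # response rates in comparable studies without it

mean_with = sum(with_incentive) / len(with_incentive)
mean_without = sum(without_incentive) / len(without_incentive)

effect = mean_with - mean_without
print(f"mean with: {mean_with:.3f}, mean without: {mean_without:.3f}, "
      f"estimated effect: {effect:+.3f}")
```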
Article
Prepaid monetary incentives consistently exert the largest positive effect on response rates in mail surveys. For web-based surveys, it has not been possible to administer monetary incentives via the Internet in advance. Recently, several new web-based services have been introduced that can transfer money to people online. Does this really have the same positive effect on response rates as shown in traditional mail surveys? The authors investigated this question experimentally in the context of a web-based survey among members of a professional association in Virginia. The results indicate that prepaid incentives in web surveys seem to have no advantages concerning the willingness to participate, actual completion rates, and the share of incomplete response patterns when compared with postpaid incentives. Furthermore, postpaid incentives show no advantages over no incentives. Finally, compared to no incentives, prize draws increase completion rates and also reduce various incomplete participation patterns.
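A comparison of completion rates between two incentive conditions of this kind is often summarized with a two-proportion test; the sketch below uses invented counts and is not the authors' analysis:

```python
# Two-proportion z-test comparing completion rates between two incentive
# conditions (all counts are invented for illustration).
from math import sqrt
from statistics import NormalDist

completed_a, invited_a = 312, 600    # e.g., prize-draw condition
completed_b, invited_b = 270, 600    # e.g., no-incentive condition

p_a, p_b = completed_a / invited_a, completed_b / invited_b
p_pool = (completed_a + completed_b) / (invited_a + invited_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / invited_a + 1 / invited_b))
z = (p_a - p_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"completion rates: {p_a:.3f} vs {p_b:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```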
Article
Five experiments examined how participation in WWW-studies was influenced by framing the reception of an incentive as contingent on the completeness of the submitted questionnaire. Four experiments were carried out in a university-based online panel and one in a market research online panel. Four times the incentive was a prize draw and once it was a personal gift. In each experiment, two conditions were contrasted: one group received an e-mail invitation mentioning that all participants are eligible for the incentive (= unconditional incentive), whereas the other group was told that only those participants who answer every question in the questionnaire would receive the incentive (= contingent incentive). Dependent measures were response rate, retention rate, number of omitted closed-ended items, length of answers to open-ended questions, and stereotypical answering of grid-like question batteries. There were no significant effects. The results of the individual experiments were then meta-analytically aggregated. It was revealed that contingent relative to unconditional incentives decrease response to a study, while at the same time the sparser data are not compensated for by a superior data quality or retention.
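The meta-analytic aggregation mentioned above could, for instance, take the form of a fixed-effect (inverse-variance) pooling of the per-experiment effects. The sketch below assumes invented effect estimates and standard errors rather than the study's actual results:

```python
# Fixed-effect (inverse-variance) pooling of per-experiment effects, as one
# simple way to aggregate results across experiments. The effect estimates
# and standard errors below are invented for illustration.
from math import sqrt

# (effect, standard error) per experiment, e.g., difference in response rates
# between contingent and unconditional incentive conditions
experiments = [(-0.031, 0.018), (-0.012, 0.022), (-0.025, 0.020),
               (-0.008, 0.025), (-0.019, 0.021)]

weights = [1 / se ** 2 for _, se in experiments]
pooled = sum(w * eff for (eff, _), w in zip(experiments, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:+.4f} (SE {pooled_se:.4f})")
```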
Article
Two incentive experiments were conducted in different online access panels. Experiment 1 was carried out in a commercial market research panel. It examined whether three different types of promised incentives (redeemable bonus points, money lottery and gift lottery), four different amounts of bonus points or raffled money, and two different denominations of raffled money influenced response quantity, sample composition, response quality and survey outcome. Type of incentive and number of bonus points mildly influenced dropout and sample composition. Moreover, response was higher with bonus points than with the two types of lotteries. Response quality and survey outcome were not affected. Experiment 2 was conducted in a non-profit panel, half of whose participants are self-selected and half non-self-selected. Incentives were two different amounts of raffled money in two different denominations. Response, dropout, response quality, survey outcome and sample composition were not affected. Based on a cost-benefit analysis, recommendations for employing incentives in online access panels are given.
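The cost-benefit reasoning behind such recommendations often reduces to the cost per completed interview under each incentive scheme. A minimal sketch with invented invitation counts, completion rates and costs:

```python
# Cost-per-complete sketch for comparing incentive schemes, in the spirit of a
# cost-benefit analysis. All costs and rates are invented for illustration.

schemes = {
    # name: (invitations, completion_rate, incentive_cost_per_complete, fixed_cost)
    "bonus points":  (2000, 0.45, 1.50, 200.0),
    "money lottery": (2000, 0.38, 0.50, 200.0),
    "gift lottery":  (2000, 0.36, 0.40, 200.0),
}

for name, (invited, rate, incentive_cost, fixed) in schemes.items():
    completes = invited * rate
    total_cost = fixed + incentive_cost * completes
    print(f"{name:13s}: {completes:5.0f} completes, "
          f"cost per complete = {total_cost / completes:.2f}")
```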
Article
Based on their success at predicting the outcome of elections, opinion polls are used by the media, government and the political parties to measure public attitudes to a very wide range of other issues, helping to shape policy proposals and inform debate. Despite their importance within the political process, the media, political parties and pressure groups nevertheless want feedback from opinion polls quickly and cheaply. Large-scale random probability surveys may provide the best-quality data but fail miserably on speed and cost. Among practical survey methods the telephone has become the most commonly used mode of interview for general population opinion polls despite some doubts over response rates. Online polls have an increasing share of the market, despite the obvious drawbacks of relatively low internet penetration and the fact that they rely on panels of willing participants. This paper shows that while telephone polls produce answers that are similar to those obtained from large-scale random surveys there are some sharp differences in results obtained online. The paper shows that these differences cannot be removed by weighting by demographics, newspaper readership or by using attitudinal variables. The research does, however, uncover evidence of significant and disturbing mode effects. In particular, a growing band of professional online panel members seem to race through online surveys, giving responses that explain a good measure of the differences between online and telephone research. These findings suggest that the online research industry needs to devise methods to ensure online respondents carefully consider the answers they give, and design questions and answer codes that do not inadvertently lead online respondents to certain answers.
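One common safeguard against respondents who race through online questionnaires is to flag "speeders" whose completion time falls below some fraction of the median duration. The sketch below uses invented durations and an arbitrary 50% cutoff; it is an illustration of the idea, not the paper's method:

```python
# Minimal "speeder" flag: mark respondents whose completion time falls below
# a fraction of the median duration. Times and the 0.5 cutoff are illustrative
# choices, not a standard taken from the paper.
from statistics import median

durations_sec = [640, 710, 280, 655, 150, 690, 720, 305, 600, 665]  # invented
cutoff = 0.5 * median(durations_sec)

speeders = [i for i, t in enumerate(durations_sec) if t < cutoff]
print(f"median = {median(durations_sec):.0f}s, cutoff = {cutoff:.0f}s, "
      f"flagged respondents: {speeders}")
```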
Article
Designing Effective Web Surveys is a practical guide to designing Web surveys, based on empirical evidence and grounded in scientific research and theory. It is designed to guide survey practitioners in the art and science of developing and deploying successful Web surveys. It is not intended as a "cook book"; rather it is meant to get the reader to think about why we need to pay attention to design. It is intended for academic, government, and market researchers who design and conduct Web surveys.
Article
Many factors are believed to affect mail survey response behavior and therefore create both nonresponse and response biases. The factor experimentally investigated in this study was respondents' level of interest in the survey topic. 1,152 adult participants of an amateur bowling tournament were mailed one version of a questionnaire, and 579 were mailed another version. The difference between the two versions was the presumed topic of the questionnaires. Results verify the dramatic impact that interest in the topic can have on response rates: respondents were almost twice as likely to participate if the survey dealt with a higher-interest topic than if the topic were of less interest. The study also investigated other response behavior: relative to lower-interest respondents, higher-interest respondents were less likely to omit answers to specific questions but did not differ significantly in the internal consistency of their responses or in their speed of response.
Article
The decision process when requested to participate in a Web survey is understood most appropriately by applying a psychological theory of human action. Consequently, this study utilized an extended version of Ajzen's theory of planned behavior to predict and explain the number of participations in a five-wave Web-based panel study. Based on this model, the determinants of unit nonresponse in Web-based surveys are one's attitude toward participating in Web surveys, internalized social pressure, perceived behavioral control, and extent of moral obligation toward participating. The results indicate a satisfactory predictive power of the model. Perceived behavioral control and attitude toward participation predict the intention to participate best, followed by internalized social pressure and moral obligation. The theoretical perspective pursued proved to be valuable in terms of its predictive and explanative power as well as its practical value for Web-based survey research.
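A model in this spirit can be approximated by regressing the number of wave participations on attitude, internalized social pressure, perceived behavioral control, and moral obligation. The sketch below uses simulated data and ordinary least squares; it is not the authors' estimation procedure:

```python
# Sketch of a predictive model in the spirit of the extended theory of planned
# behavior: regress number of wave participations (0-5) on attitude, social
# pressure, perceived behavioral control, and moral obligation. Data are
# simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 400
attitude = rng.normal(0, 1, n)
pressure = rng.normal(0, 1, n)
control  = rng.normal(0, 1, n)
moral    = rng.normal(0, 1, n)

# Simulated outcome: more positive attitude and higher control -> more waves
latent = 2.5 + 0.8 * attitude + 0.3 * pressure + 0.7 * control + 0.2 * moral
participations = np.clip(np.round(latent + rng.normal(0, 0.8, n)), 0, 5)

X = np.column_stack([np.ones(n), attitude, pressure, control, moral])
coefs, *_ = np.linalg.lstsq(X, participations, rcond=None)
for name, b in zip(["intercept", "attitude", "pressure", "control", "moral"], coefs):
    print(f"{name:10s}: {b:+.3f}")
```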
Article
Dropouts can be a significant problem in web surveys, but theoretically motivated studies of this problem are rare. In this study, we use a dynamic theory of decision making, the decision field theory, to predict and explain behavior of respondents in a web survey. By registering respondents' momentary subjective experiences throughout the survey, we gained some insights into antecedents and consequences of respondents' decision to drop out. The results show that interest and experienced burden change throughout the survey, depending on characteristics of questions, respondents, and survey design. Respondents who drop out often express lower interest and higher experienced burden than the respondents who stay. Their growing preference for dropout can be detected in the decreased quality of their answers even before the point of dropout. The results could help in practical work and open new paths to theoretical explanations of survey behavior.
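The finding that dropouts report lower interest and higher burden can be illustrated descriptively by comparing the two groups on these momentary measures. The sketch below simulates such data; the decision field theory model used in the paper is far richer than this simple comparison:

```python
# Descriptive sketch: compare mean momentary interest and experienced burden
# between respondents who dropped out and those who completed. Data are
# simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 300
interest = rng.normal(3.5, 0.8, n)   # 1-5 scale, simulated
burden   = rng.normal(2.5, 0.8, n)

# Higher burden and lower interest raise the (simulated) dropout probability
p_drop = 1 / (1 + np.exp(-(-1.0 - 1.2 * (interest - 3.5) + 1.0 * (burden - 2.5))))
dropped = rng.random(n) < p_drop

for label, mask in [("dropped out", dropped), ("completed", ~dropped)]:
    print(f"{label:11s}: mean interest {interest[mask].mean():.2f}, "
          f"mean burden {burden[mask].mean():.2f} (n={mask.sum()})")
```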
Article
The current study investigated the role of relational challenges as reported by 309 protégés in various stages and types of mentoring relationships. The Mentoring Relationship Challenges Scale (MRCS) was newly constructed using the results of an earlier qualitative study (Ensher & Murphy, 2005). The scale measured three factors of relational challenges which were: Demonstrating Commitment and Resilience, Measuring Up to a Mentor's Standards, and Career Goal and Risk Orientation. The results demonstrated that with respect to mentoring stages, those protégés in the beginning stages of their relationships reported experiencing significantly fewer challenges related to Demonstrating Commitment and Resilience than those in the mature or ending stages of the relationship. Also, it was found that the type of mentoring relationship (traditional, step-ahead, or peer) affected the prevalence of the three types of challenges. Protégés in peer relationships reported significantly fewer of all three types of challenges than those in step-ahead or traditional relationships. However, contrary to predictions, there were no significant differences found between those in informal versus those in formal mentoring relationships. As expected, protégé and mentor gender interacted significantly. Female protégés reported experiencing significantly fewer challenges related to the factor of Measuring Up to a Mentor's Standards, than did male protégés. Also, female protégés reported experiencing a significantly higher degree of relational challenges related to Career Goal and Risk Orientation from their male mentors than from their female mentors. Finally, after controlling for perceptions of career and psychosocial support for protégés in traditional mentoring relationships, two of the three relational challenges factors remained significant and explained a significant amount of variance in overall satisfaction with the mentoring relationship. This suggests that relational challenges, at least for traditional mentoring relationships, serve as an important mechanism to impact overall relationship satisfaction.
Article
Web panels are widely employed to conduct marketing research surveys, yet little is known regarding why consumers join web panels or participate in web surveys. The present research investigated the effects of individuals’ motivational traits on whether they joined web panels, participated in surveys upon joining, and the effort they put into their responses. A longitudinal study employing population profiling gathered personality measures from the entire population of potential panelists (N = 751) and invited them to join a web panel. Those accepting (N = 503) were sent a series of six marketing research surveys. Results revealed that consumers’ need for cognition, curiosity, agreeableness and extraversion were significant predictors of joining the web panel. The first three traits also predicted survey participation, as did openness to experience. Among participants, response effort was most strongly affected by curiosity, extraversion, and conscientiousness. An additional experiment, conducted with 327 participants, ruled out a selection-bias explanation for some results. These findings provide useful insights for researchers using web panels and point out the limitations of using strictly demographics-based weighting schemes when selecting web panels.