Article

Abstract

Some respondents of online surveys click responses at random. Screeners or instructional manipulation checks (IMC) have become customary for identifying this strong form of satisficing. This research first analyzes the factors that condition IMC failures using an online panel survey carried out in Spain (2011–2015). Our data show that the probability of passing a screener depends mainly on the screener’s difficulty, the individuals’ intrinsic motivations for answering the survey, and past failures. We then address the substantive consequences of omitting those who fail to pass IMCs. We find that this strategy introduces an additional source of bias in descriptive analyses. The article ends with a discussion of the implications that these findings have for the use of IMCs.


... Greszki et al. [55] found that the exclusion of overly fast respondents did not significantly alter the results of their substantive models. This finding is further substantiated by Anduiza and Galais [62] and Gummer et al. [53], who found that excluding inattentive participants did not significantly improve the fit of their explanatory models. Gummer et al. [53] further noted that they would have drawn the same substantive conclusions from each of the four models with and without inattentive respondents. ...
... A large body of scholarship has also advised against eliminating respondents who did not fully follow the instructions and failed attention checks [51,58,62,63]. Some recent studies have raised a concern that eliminating participants who fail attention checks might lead to a demographic bias, threaten external validity, and limit the generalizability of study findings if participants of a specific demographic are more likely to fail attention checks compared to others [48]. ...
... Despite the increasing popularity of the Facebook advertising platform for recruiting study participants, surprisingly few studies exist on the use of attention checks and how they may be used to identify respondents who provide low-quality, low-effort answers. The bulk of studies using attention checks have relied on non-representative samples [61], large-scale online (or offline) panel surveys [53,62], or a comparison of both [45,64-66]. Many of these samples are more experienced in completing surveys, and the data sources are usually of high quality. ...
Article
Full-text available
Multiple studies have successfully used Facebook’s advertising platform to recruit study participants. However, very limited methodological discussion exists regarding the magnitude of low-effort responses from participants recruited via Facebook and from African samples. This study describes a quasi-random study that identified and enrolled young adults in Kenya, Nigeria, and South Africa between 22 May and 6 June 2020, based on an advertisement budget of 9,000.00 ZAR (US $521.44). The advertisements attracted over 900,000 views, 11,711 unique clicks, 1,190 survey responses, and a total of 978 completed responses from young adults in the three countries during the period. Completion rates on key demographic characteristics ranged from 82% among those who attempted the survey to about 94% among eligible participants. The average cost of the advertisements was 7.56 ZAR (US $0.43) per survey participant, 8.68 ZAR (US $0.50) per eligible response, and 9.20 ZAR (US $0.53) per complete response. The passage rate on the attention checks varied from about 50% on the first question to as high as 76% on the third attention check question. About 59% of the sample passed all the attention checks, while 30% passed none of the attention checks. Results from a truncated Poisson regression model suggest that passage of attention checks was significantly associated with demographically relevant characteristics such as age and sex. Overall, the findings contribute to the growing body of literature describing the strengths and limitations of online sample frames, especially in developing countries.
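The per-response costs quoted above follow from dividing the advertising budget by the respective response counts. A minimal sketch of that arithmetic in Python; the eligible-response count is not reported directly and is back-calculated here from the stated 8.68 ZAR per eligible response, so treat it as an inferred figure, and expect cent-level rounding differences from the published USD values.

```python
# Cost-per-response arithmetic implied by the figures reported above.
budget_zar = 9_000.00                  # total advertisement budget
usd_per_zar = 521.44 / 9_000.00        # exchange rate implied by the abstract

counts = {
    "survey participant (attempted)": 1_190,
    "eligible response (inferred)": round(9_000.00 / 8.68),  # ~1,037, back-calculated
    "complete response": 978,
}

for label, n in counts.items():
    cost = budget_zar / n
    print(f"{label}: {cost:.2f} ZAR (US ${cost * usd_per_zar:.2f})")
```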
... Another complication is that different researchers have used different IMCs. The short IMCs in Table 1 provide single-sentence instructions to select a specific category, or request respondents to skip the question (Anduiza & Galais, 2017;Liu & Wronski, 2018), possibly by clicking somewhere else on a page (Oppenheimer et al., 2009). Other short IMCs present statements that have an obviously correct answer (Peer et al., 2017). ...
... Other short IMCs present statements that have an obviously correct answer (Peer et al., 2017). The long IMCs present multiple-lined texts followed by an instruction to not select an answer-category but instead to click somewhere else (Anduiza & Galais, 2017;Oppenheimer et al., 2009). Other long IMCs instruct respondents to select a category and add a number to this category (Peer et al., 2017) or to select multiple categories (Liu & Wronski, 2018). ...
... Other long IMCs instruct respondents to select a category and add a number to this category (Peer et al., 2017) or to select multiple categories (Liu & Wronski, 2018). We adopted the terms "short" and "long" IMCs from Anduiza and Galais (2017) who found that the number of words strongly determines IMC-failure rates, more so than respondents' characteristics, survey length, or the place of the IMC in the survey. In all the papers that are included in Table 1, higher failure rates were reported on the longer IMCs. ...
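The pass/fail logic behind the short and long IMCs described in these excerpts is mechanically simple: the respondent passes only by following the embedded instruction exactly, whether that instruction says to pick a specific category, pick several categories, or leave the item unanswered. A hypothetical sketch of such a scorer (the item texts and field names below are illustrative, not taken from the cited studies):

```python
# Illustrative only: a generic pass/fail scorer for instructed-response
# items (IMCs/screeners) of the kinds described above.

from dataclasses import dataclass, field

@dataclass
class InstructedItem:
    required: set = field(default_factory=set)   # categories that must be selected
    forbidden: set = field(default_factory=set)  # categories that must not be selected
    allow_skip: bool = False                     # "do not answer this question"

    def passed(self, selected: set) -> bool:
        if self.allow_skip:
            return len(selected) == 0            # passing means leaving the item blank
        return self.required <= selected and not (self.forbidden & selected)

# A short IMC: "please select 'none of the above'".
short_imc = InstructedItem(required={"none of the above"})

# A long IMC: multi-line text ending with "select both 'other' and 'not sure'".
long_imc = InstructedItem(required={"other", "not sure"})

print(short_imc.passed({"none of the above"}))   # True
print(long_imc.passed({"other"}))                # False: only one of the two instructed boxes
```

As the excerpts note, longer IMCs differ mainly in how much misleading text surrounds the instruction, not in how the response is scored.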
... In recent years, the issue of careless responding has been increasingly investigated in the field of social and political science, where online surveys are usually administered to heterogeneous respondent samples gathered from voluntary opt-in panels of commercial survey institutes [e.g., Miller & Baker-Prewitt, 2009, Hauser & Schwarz, 2015, Hauser et al., 2017, Berinsky et al., 2014, Mancosu et al., 2019, Gummer et al., 2018, Study 1 and Study 2, Anduiza & Galais, 2016], or probability-based online access panels recruited offline [Gummer et al., 2018, Study 3], which are attributed higher data quality than non-probability samples [e.g., Yeager et al., 2011]. All these studies identified subjects engaged in careless responding. (Speeding is considered problematic for data quality, since conscientious answers are unlikely with only marginal response time [Gummer et al., 2018].) ...
... This impression may become all the more established as monetary incentives do not seem to make a difference. In their longitudinal correlational study (a six-wave panel), Anduiza and Galais [2016] find evidence that respondents' (self-reported) motivation to participate because of paid material incentives did not increase the likelihood of failing attention checks. These authors take this result as "additional support for the idea that material incentives are not a problem, nor (…) 'professional' respondents" [Anduiza & Galais, 2016, 514]. ...
... Empirical evidence also points to the fact that incentives in web surveys increase response rates particularly among less-motivated respondents [Ernst Stähli & Joye, 2016] and that subjects who access online surveys for whatever reason are more likely to finish them when an incentive is offered than when not [Göritz, 2006]. At the same time, previous studies on careless responding found that attention check failures are promoted by situational factors affecting respondents' motivation to respond adequately, such as their lack of interest in the survey topic [e.g., Gummer et al., 2018, Anduiza & Galais, 2016, Maniaci & Rogge, 2014]. (Mancosu et al. [2019] applied screener questions.) These findings ...
Article
Full-text available
In this paper, we examine rates of careless responding and reactions to detection methods (i.e., attention check items and instructions) in an experimental setting based on two different samples. First, we use a quota sample (with monetary incentive), a central data source for internet-based surveys in sociological and political research. Second, we include a voluntary opt-in panel (without monetary incentive) well suited for conducting survey experiments (e.g., factorial surveys). Respondents’ reactions to the detection items are analyzed by objective, nonreactive indicators (i.e., break-off, item nonresponse, and measurement quality), and two self-report scales. Our reaction analyses reveal that the detection methods we applied are not only well suited for identifying careless respondents, but also exert a motivational rather than a demotivating influence on respondents’ answer behavior and, hence, contribute to data quality. Furthermore, we find that break-off behavior differs across both samples suggesting that results from methodological online studies on the basis of incentivized samples do not necessarily transfer to online studies in general.
... Regarding data quality, it has been argued that participants' lack of motivation (i.e., amotivation) is a primary reason for inattentive or low effort responding (Huang, Curran, Keeney, Poposki, & DeShon, 2012). In addition, students' external motivation (e.g., doing research for extra credit) has been found to predict less attentive responding (Maniaci & Rogge, 2014), whereas participants' intrinsic motivation and interest in the topic being studied associates with more attentive responding (Anduiza & Galais, 2017;Maniaci & Rogge, 2014). Therefore, students' motivation for participating in psychological research may have implications for data quality. ...
... Psychological research relies heavily on college students for data (Arnett, 2008;Henrich et al., 2010) and self-determined motivation is an important individual difference variable which associates with, and predicts, important outcomes (Anduiza & Galais, 2017;Deci & Ryan, 2000;Fernet et al., 2012;Keatley et al., 2013;Maniaci & Rogge, 2014;Vansteenkiste et al., 2004). However, a comprehensive examination of student motivation to participate in psychological research seems to be lacking. ...
... Previous research has linked intrinsic or autonomous motivation to conscientiousness (Phillips et al., 2003), depth of processing (Vansteenkiste et al., 2004), and respondents' attention during research studies (Anduiza & Galais, 2017;Maniaci & Rogge, 2014). In addition, autonomous motivation to participate in research should relate to seeing value in psychological research. ...
Article
In a series of four studies, I developed and found evidence supporting the validity of a new measure, the Motivation to Participate in Psychological Research Scale (MPPRS). Based upon the tenets of Self-Determination Theory and aimed at measuring motivation in undergraduate students, the scale demonstrated a three-factor structure in exploratory and confirmatory factor analyses (Study 1: N = 238, Study 2: N = 264, Study 3: N = 297). Factors corresponded to autonomous motivation, controlled motivation, and amotivation. Preliminary evidence supported the validity of MPPRS scores, and subscales differentiated psychology majors from non-majors, as well as associated with the timing of research participation during the semester. Examining student motivation with the MPPRS has possible implications for data quality, as a moderator of research findings, and might be used to track changes in students’ interest regarding psychological research. However, future research is needed to assess the predictive validity of the MPPRS.
... On one hand, Berinsky and colleagues (2014) show that older, female and more educated respondents are more likely to pass screener questions. On the other, Anduiza and Galais (2017) find that the educational ...
... Psycholinguistic literature (Tourangeau et al., 2000; Lenzner et al., 2010) shows that the length and the syntactic complexity of a question may lead to difficulties in understanding the question; this, in turn, can have a significant impact on data quality (Christian et al., 2007). Previous methodological literature has partially confirmed these results: using a survey experiment conducted with a relatively small sample (a few hundred cases overall), Liu and Wronski (2018) demonstrate that the length of a screener is negatively associated with the likelihood of passing it (see also Anduiza and Galais, 2017). The authors also show that people who are able to pass more difficult screeners provide responses whose quality is not significantly different from that of those who pass the easier ones. ...
... The first hypothesis concerns the relation between cognitive load and the likelihood of correctly accomplishing the task requested in the screener. As underlined above, the complexity of a screener should influence the cognitive load required of the respondent and, thus, the likelihood of accomplishing the task (Anduiza and Galais, 2017; Liu and Wronski, 2018). Starting with this consideration, it is possible to present the hypothesis as follows: ...
Article
Full-text available
In online surveys, the control of respondents is almost absent: for this reason, the use of screener questions or “screeners” has been suggested to evaluate respondent attention. Screeners ask respondents to follow a certain number of instructions described in a text that contains a varying amount of misleading information. Previous work focused on ad hoc experimental designs composed of a few questions, generally administered to small samples. Using an experiment inserted into an Italian National Election Study survey (N = 3,000), we show that short screeners – namely, questions with a reduced amount of misleading information – should be preferred to longer screeners in evaluating the attentiveness of respondents. We also show there is no effect of screener questions in activating respondent attention.
... The percentage of correct responses to the attention check is reported as a metric of the quality of data rather than excluding participants [26]. ...
... In our pre-registered analyses, we set out to investigate whether the average speed of completing trials influenced the likelihood that a participant would correctly respond to the attention check. This analysis was planned to assess whether our catch-trial served as an accurate method of attention checking so that completion rates could be reported as a measure of data quality [26]. However, very few participants responded incorrectly to the attention check, meaning there was insufficient variation in the outcome to perform this analysis. ...
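The analysis sketched in this excerpt, had there been enough failures, would amount to a simple logistic regression of attention-check success on average trial speed. A hedged illustration on simulated data (variable names and the data-generating process are assumptions, not the authors'):

```python
# A minimal sketch of the kind of analysis described above: regressing
# attention-check success on average trial completion time.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "mean_trial_seconds": rng.gamma(shape=4.0, scale=1.5, size=n),
})
# Simulate a pass probability that rises with time spent per trial.
logit = -1.0 + 0.6 * df["mean_trial_seconds"]
df["passed_check"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("passed_check ~ mean_trial_seconds", data=df).fit(disp=False)
print(model.summary())
```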
Article
Full-text available
Studies of food-related behaviours often involve measuring responses to pictorial stimuli of foods. Creating these can be burdensome, requiring a significant commitment of time, and with sharing of images for future research constrained by legal copyright restrictions. The Restrain Food Database is an open-source database of 626 images of foods that are categorized as those people could eat more or less of as part of a healthy diet. This paper describes the database and details how to navigate it using our purpose-built R Shiny tool and a pre-registered online validation of a sample of images. A total of 2150 participants provided appetitive ratings, perceptions of nutritional content and ratings of image quality for images from the database. We found support for differences between Food Category on appetitive ratings which were also moderated by state hunger ratings. Findings relating to individual differences in appetite ratings as well as differences between BMI weight categories are also reported. Our findings validate the food categorization in the Restrain Food Database and provide descriptive information for individual images within this investigation. This database should ease the burden of selecting and creating appropriate images for future studies.
... As for the IMC, which is commonly used in studies focusing on satisficing [2], [12], [29], [30], we decided that it could not be used as a satisficing indicator and did not use it, because the questionnaire did not include questions with the special instructions an IMC requires. Figure 3 shows a schematic of the questionnaire used in this experiment. The questionnaire asks respondents about their personalities, mindsets, motivation for completing the questionnaire, and self-assessment of satisficing tendencies, such as whether they complete their answers as quickly as possible. ...
... Excluding careless responses from the sample increases the internal validity of the survey because this exclusion reduces noise, but it also reduces the diversity of the sample, which may compromise the external validity. To avoid this trade-off, some believe it is preferable to turn satisficing respondents' data into valid data by providing some form of intervention for those respondents [12], [29]. Oppenheimer et al. [2] conducted an experiment in which people who violated the IMC were repeatedly redirected to the same IMC until they cleared it. ...
Article
Full-text available
Some respondents give careless responses due to “satisficing,” an attempt to complete a questionnaire as quickly and easily as possible. To obtain results that reflect reality, satisficing must be detected and the affected responses excluded from analysis. One established method detects satisficing by adding questions that check for violations of instructions and inconsistencies. However, this approach may cause respondents to lose their motivation and prompt them to satisfice. Additionally, a deep learning model that automatically answers such questions has been reported, which threatens the reliability of the conventional method. To detect careless responses without inserting such screening questions, a previous study attempted machine learning (ML) detection using data obtained from answer results, with a detection rate of 55.6%, which is not sufficient for practical use. We therefore hypothesized that a supervised ML model with a higher detection rate could be constructed by using on-screen answering behavior as features. However, (1) no existing questionnaire system can record on-screen answering behavior, and (2) even if the answering behavior can be recorded, it is unclear which answering-behavior features are associated with satisficing. We developed an answering-behavior recording plug-in for LimeSurvey, an online questionnaire system used all over the world, and collected a large amount of data (from 5,692 people) in Japan. We then examined and generated a variety of features from answering behavior and constructed ML models to detect careless responses. We call this detection method the ML-ABS (ML-based answering behavior scale). Evaluation by cross-validation demonstrated a detection rate for careless responses of 85.9%, much higher than that of the previous ML method. Among the various features we proposed, we found that reselecting Likert-scale answers and scrolling particularly contributed to the detection of careless responses.
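The general recipe described above (derive features from on-screen answering behavior, label responses as careless or not, and evaluate a supervised classifier by cross-validation) can be sketched as follows. This is not the authors' ML-ABS implementation; the features, labels, and model below are illustrative assumptions.

```python
# Sketch of a supervised careless-response detector trained on
# answering-behaviour features, in the spirit of the approach above.
# Data, feature names, labels, and model choice are illustrative.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2_000
data = pd.DataFrame({
    "likert_reselections": rng.poisson(1.5, n),   # how often an answer was changed
    "scroll_events": rng.poisson(20, n),          # scrolling activity on the page
    "median_item_seconds": rng.gamma(3.0, 2.0, n),
    "straightline_run": rng.integers(1, 15, n),   # longest run of identical answers
})
# Hypothetical labels (e.g., from self-reports or screener outcomes),
# with some label noise added.
label = (data["median_item_seconds"] < 3) | (data["straightline_run"] > 10)
label = label.astype(int) ^ (rng.random(n) < 0.1).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, data, label, cv=5, scoring="accuracy")
print(f"cross-validated detection accuracy: {scores.mean():.3f}")
```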
... In addition to measuring the attention a respondent pays to a single survey (Berinsky et al., 2014;Anduiza and Galais, 2017), IMCs could even be employed as measures of the attention a respondent pays, in general, when answering surveys. In other words, they could also measure a general individual predisposition. ...
... To face this issue, previous research focused on the calibration of the text of the IMCs, by analysing the working of IMCs which differ in their cognitive load, namely, in their length. Besides showing that the length of an IMC is negatively associated with the likelihood of passing it (Anduiza and Galais, 2017;Liu and Wronski, 2018;Mancosu et al., 2019), the main conclusion of those studies is that short IMCs, more similar to other survey questions, are more appropriate to identify inattentive respondents (Liu and Wronski, 2018;Mancosu et al., 2019). ...
Article
Full-text available
In online surveys, the use of manipulation checks aimed at measuring respondents’ attentiveness has become common. More than being measures of attentiveness pertaining to a specific survey, instructional manipulation checks (IMC) could work as generic measures of the quality of the answers a person gives when completing a questionnaire. By using several waves of the ITANES (Italian National Election Study) – University of Milan online panel survey, the article shows that the outcome of an IMC predicts the quality of the answers a respondent gives in previous and subsequent waves of the panel. Moreover, through a survey experiment that randomizes the length of an IMC we show that, overall, the answers’ quality of ‘attentive’ respondents assigned to different IMCs do not substantially vary. Findings also show that IMCs are reliable measures, as the outcome of two IMCs placed in two consecutive waves proved to be highly associated.
... The concept of satisficing describes the human tendency to meet minimum criteria for adequacy rather than to optimize (Anduiza & Galais, 2017). Answering surveys requires substantial cognitive effort from respondents. ...
... To produce high-quality data, respondents need to go through four stages of cognitive processing: (1) carefully interpret the meaning of the questions; (2) retrieve relevant information from memory; (3) make a judgment based on the integration of the retrieved information; and (4) report an accurate answer (Tourangeau & Rasinski, 1988). According to the theory of satisficing, survey respondents who satisfice tend to skip or disregard one or more of these four stages (Anduiza & Galais, 2017; Barge & Gehlbach, 2012), resulting in suboptimal answers and lower data quality. That is, respondents who satisfice interpret questions less carefully, search their memories less thoroughly, integrate information less effortfully, and report their answers more casually (Krosnick, 1991). ...
Article
Using mobile devices to complete web-based surveys is an inescapable trend. Given the growth of this medium, some researchers are concerned about whether mobile devices are a viable channel for administering self-report online surveys. Drawing on two online surveys, one using a US sample and one using a China sample, this study compared response quality between participants responding via mobile devices and those responding via PCs. Results from both the US and China samples revealed that although mobile respondents took longer to complete surveys than PC respondents, response quality did not differ significantly between these groups. Several behaviour patterns among mobile respondents were also identified in both samples. These findings have practical implications for optimizing web-based surveys for mobile users in tourism and hospitality research.
... We did not use attention check questions as per advice from recent research on survey methodology (Berinsky et al., 2014;Clifford and Jerit, 2015;Anduiza and Galais, 2017). Such questions may increase Social Desirability Bias (Clifford and Jerit, 2015), which is an important issue for surveys related to privacy. ...
... Such questions may increase Social Desirability Bias (Clifford and Jerit, 2015), which is an important issue for surveys related to privacy. Discarding responses based on attention check questions can introduce demographic bias related to gender, age and education (Berinsky et al., 2014; Clifford and Jerit, 2015; Anduiza and Galais, 2017), which can impact nationally representative surveys. Lastly, our pilot results indicated that the median completion time was short (∼3 min), which reduces the decline in attention due to satisficing behavior. ...
Article
Full-text available
Understanding user privacy expectations is important and challenging. General Data Protection Regulation (GDPR) for instance requires companies to assess user privacy expectations. Existing privacy literature has largely considered privacy expectation as a single-level construct. We show that it is a multi-level construct and people have distinct types of privacy expectations. Furthermore, the types represent distinct levels of user privacy, and, hence, there can be an ordering among the types. Inspired by expectations-related theory in non-privacy literature, we propose a conceptual model of privacy expectation with four distinct types – Desired, Predicted, Deserved and Minimum. We validate our proposed model using an empirical within-subjects study that examines the effect of privacy expectation types on participant ratings of privacy expectation in a scenario involving collection of health-related browsing activity by a bank. Results from a stratified random sample (N = 1,249), representative of United States online population (±2.8%), confirm that people have distinct types of privacy expectations. About one third of the population rates the Predicted and Minimum expectation types differently, and differences are more pronounced between younger (18–29 years) and older (60+ years) population. Therefore, studies measuring privacy expectations must explicitly account for different types of privacy expectations.
... People with different levels of education may have different understandings of their lives, different standards for evaluating their lives, different interpretations of the SWLS items, or they may have different familiarity with or ease of answering the survey questions. For example, people with different levels of education respond differently to instructional manipulation checks [10,11] or show different levels of inattention when answering surveys [12]. Research in other fields, such as human values [13], have found evidence of non-invariance across educational levels. ...
Article
Full-text available
The purpose of this study was to examine measurement invariance of the Dutch version of the Satisfaction with Life Scale between groups based on gender, age, education, perceived difficulty of the survey, perceived clarity of the survey, and national background. A nationally representative Dutch sample was used (N = 5369). Multiple-groups confirmatory factor analysis was conducted to test measurement invariance. Full metric and scalar invariance were supported for all groups studied. These results indicate that the items of the scale are understood and answered similarly by all groups. Therefore, the 5 items of the Satisfaction with Life Scale measure the same construct in all groups. In other words, the differences in the life satisfaction scores are indicative of actual differences in life satisfaction rather than measurement artifacts and biases. This means that the levels of life satisfaction can be meaningfully compared between groups in The Netherlands.
... Loss of data due to failed AMCs is not uncommon in this field and others (Abbey & Meloy, 2017;Cullen & Monds, 2020;Ruva & Sykes, 2022;Salerno et al., 2021). Scholars have debated the use of AMCs, with some evidence suggesting they improve data quality (Abbey & Meloy, 2017;Shamon & Berning 2020) and others suggesting the opposite (Anduiza & Galais, 2017;Aronow et al., 2019). Another issue for data quality may be the source of the sample. ...
Article
Full-text available
Two studies examined the effectiveness of the Unconscious Bias Juror (UBJ) video and instructions at reducing racial bias in Black and White mock-jurors’ decisions, perceptions, and counterfactual endorsement in a murder (Study 1; N = 554) and battery (Study 2; N = 539) trial. Participants viewed the UBJ video or not, then read pretrial instructions (general or UBJ), a trial summary, and posttrial instructions (general or UBJ). In Study 1, juror race moderated the effect of defendant race on verdicts, culpability, and credibility. White, but not Black, jurors demonstrated greater leniency toward Black defendants for verdicts, culpability, and credibility. The UBJ video moderated the effect of defendant race on murder counterfactual endorsement. Only when the video was absent was jurors’ counterfactual endorsement higher for the White versus Black defendant, which mediated the effect of defendant race on White jurors’ verdicts. In Study 2, White jurors were more lenient regardless of defendant race. Instructions and juror race moderated the video’s effect on credibility ratings. The video only influenced Black jurors’ credibility ratings. In conclusion, the debiasing interventions were ineffective in reducing racial bias in jurors’ verdicts. However, they do impact aspects of juror attribution and may be effective with modification.
... In addition to distractions and poor attitudes, carelessness can impact the quality of data obtained from online questionnaires. This can result from lack of interest in the survey topic (Anduiza & Galais, 2017;Gummer, Roßmann, & Silber, 2021), poor survey incentives, or the anonymity of surveys that allows less motivated respondents to be less conscientious when completing the survey (Douglas & McGarty, 2001). Careless responses are a ubiquitous problem, with estimates of careless responding varying from 33% (Goldammer, Annen, Stöckli, & Jonas, 2020) to 46% (Oppenheimer et al., 2009), and as high as 78% in one recent study (Mancosu, Ladini, & Vezzoni, 2019). ...
Article
Methodology used in sensory and consumer research has changed rapidly over the past several decades, and consumer research is now frequently executed online. This comes with advantages, but also with the potential for poor quality data due to uncontrolled elements of survey administration and execution. Published papers that utilize online data rarely report metrics of data quality or note precautions that were taken to ensure valid data. The aim of this paper is to raise awareness of the factors that influence online data quality. This is achieved by 1) identifying factors that can impact the reliability and validity of consumer data obtained with online questionnaires, 2) highlighting indices of online questionnaire data quality that can be used by researchers to assess the likelihood that their data are of poor quality, and 3) recommending a number of indices, counter-measures and best practices that can be used by sensory and consumer researchers to assess online questionnaire data quality and to ensure the highest degree of reliability and validity of their data. By making researchers aware of these sources of invalidity and by presenting available remedies and good practices, it is our intention that sensory and consumer scientists will adopt these measures and report them as a routine part of publishing online questionnaire data. Specific suggestions are offered regarding the important elements of data quality that should be included in scientific manuscripts.
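Two of the simpler data-quality indices widely used in this literature, straightlining (the longest run of identical consecutive answers) and speeding, can be computed directly from the response matrix. A minimal pandas sketch with hypothetical column names and cut-offs, not recommendations from the paper above:

```python
# Two common data-quality indices, sketched with pandas.
import pandas as pd

def longstring(row) -> int:
    """Length of the longest run of identical consecutive answers."""
    longest = run = 1
    values = list(row)
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

responses = pd.DataFrame({          # Likert answers, one column per item
    "q1": [4, 3, 5, 2], "q2": [4, 3, 5, 2], "q3": [4, 4, 5, 1],
    "q4": [4, 2, 5, 3], "q5": [4, 5, 5, 2],
})
completion_seconds = pd.Series([95, 410, 62, 388])

flags = pd.DataFrame({
    "longstring": responses.apply(longstring, axis=1),
    "speeder": completion_seconds < 120,          # hypothetical speed cut-off
})
flags["review"] = (flags["longstring"] >= 5) | flags["speeder"]
print(flags)
```

Flagged cases would then go to manual review rather than automatic exclusion, in line with the cautions cited elsewhere on this page.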
... To increase data quality, we implemented several quiz questions and attention checks in the experiment, which needed to be passed to be able to continue (see the instructions in the Appendix). Anduiza & Galais (2017) find that excluding participants who did not immediately pass attention checks can decrease the data quality. Therefore, we did not screen out participants for giving wrong answers in the attention checks, but let them proceed only once they had given the correct answer. ...
Article
Full-text available
In this paper we investigate the generalizability of the role of unequal opportunities and social group membership in redistributive preferences and examine the interaction between these two dimensions. We present results from a large-scale online experiment with more than 4,000 participants from Germany. The experiment consists of a real-effort task and a subsequent dictator game with native Germans and immigrants to Germany. We find that dictator transfers to the own group by native Germans and immigrants are higher under unequal opportunities than under equal opportunities. While we confirm the main findings reported in previous literature regarding the role of inequality of opportunity in redistribution for native Germans and immigrants, we find distinctively different patterns between both groups concerning the influence of social group membership and its interaction with unequal opportunities on redistribution. In particular, contrary to natives, immigrant dictators transfer more to in-group than to out-group receivers under unequal opportunities and do not compensate for unequal opportunities of out-group members. We conclude that in order to increase the understanding of patterns reported in the literature, it is crucial to also investigate the generalizability of findings to individuals from the general population and to explicitly cover participants such as immigrants who represent important parts of our society.
... Of course, many of the challenges of online survey research remain. Long questionnaires can be hampered by a tendency of respondents to rush and answer carelessly (Anduiza and Galais 2017). However, assessments of data quality by analysing the response time of participants can help to identify issues in this regard (Geldsetzer 2020). ...
Article
COVID-19 presents significant challenges to society and to social scientists in their attempts to understand the unfolding consequences of the pandemic. This article examines how the UJ/HSRC COVID-19 Democracy survey responded to these challenges by conducting a series of rapid-response non-probabilistic surveys using a mass membership online data-free platform, known as the Moya messenger app. While not without its limitations, we argue that the narrowing “digital divide” in South Africa means that online survey research is of increasing utility to researchers under the conditions of the pandemic and beyond. By offering insight into the technicalities of designing, translating and fielding the survey we aim to share insights into best practice that can further develop online survey research in South Africa. In particular, we reflect upon why the river sampling offered by the Moya messenger app was favoured over online panel data. This leads into a discussion of the process of weighting the data to replicate the national population, and the potential biases among participants versus non-participants in the surveys. The article then moves on to illustrate how the findings were used to provide evidence to policymakers and “voice” to adults living in South Africa about their experiences of the pandemic and their views on policy responses. The article considers how the research contributed to the public discourse around the pandemic response in 2020, including the public’s views on various pandemic policy decisions, school closures and pandemic fatigue.
... Because of these challenges, we ensured that our sample provided an adequate representation of the two focus populations (people with and without disabilities). In our pilot study, we intentionally chose not to include a screener or Instructional Manipulation Check (Oppenheimer et al., 2009) because these may induce bias (Clifford & Jerit, 2015;Anduiza & Galais, 2017). But our pilot data of about 200 individuals (half with a disability) demonstrated significant anomalies for those identifying as having a disability. ...
Article
The COVID-19 pandemic response has had a significant impact on the general population’s ability to participate in their communities. Individuals with disabilities, an already socially disadvantaged population, are more vulnerable to and have likely been disproportionately impacted by COVID-19 response conditions. Yet, the extent to which daily community living activities of people with disabilities have been impacted is unknown. Thus, this study assesses their travel behavior and community living during the COVID-19 pandemic conditions compared with those of the general population during the same period. A web survey was conducted using Qualtrics’s online panel data (respondents included 232 people with disabilities and 161 people without disabilities). Regression models found that people with disabilities reduced their daily travel to a greater extent but at varying degrees, depending on the destination types and travel modes. Reductions in taxi rides (including ride-hailing services) were most significant among people with cognitive and sensory (vision and hearing) disabilities. By place type, cognitive disability was associated with a trip reduction for multiple destination types—grocery, restaurants, outdoor recreation, indoor recreation, and healthcare providers. Findings from this study could contribute to decision- and policy-making in planning, transportation, and community services during the remainder of the COVID-19 pandemic, in future major public health crises, as well as post-COVID, because the adjustments in travel behavior and community living might be longer-term.
... Past research has differentiated attention checks into two types. One type of attention check is an instructional manipulation check (IMC), where there is a deliberate change in the instructions in a survey question designed to capture whether the respondent is reading and cognitively processing the question's instructions (Oppenheimer et al., 2009;Berinsky et al., 2014;Anduiza and Galais, 2017). An example of an IMC is adding a clause to a survey question instructing the respondent to ignore the question and provide a specific answer. ...
Preprint
Survey research methodology is evolving rapidly, as new technologies provide new opportunities. One of the areas of innovation regards the development of online interview best practices, and the advancement of methods that allow researchers to measure the attention that subjects are devoting to the survey task. Reliable measurement of subject attention can yield important information about the quality of the survey response. In this paper, we take advantage of an innovative survey we conducted in 2018, in which we directly connect survey responses to administrative data, allowing us to directly assess the association between survey attention and response quality. We show that attentive survey subjects are more likely to provide accurate survey responses regarding a number of behaviors and attributes that we can validate with our administrative data. The best strategy to deal with inattentive respondents, however, depends on the correlation between respondent attention and the outcome of interest.
... Loss of data due to failed AMCs is not uncommon in this field and others (Abbey & Meloy, 2017;Cullen & Monds, 2020;Ruva & Sykes, 2022;Salerno et al., 2021). Scholars have debated the use of AMCs, with some evidence suggesting they improve data quality (Abbey & Meloy, 2017;Shamon & Berning 2020) and others suggesting the opposite (Anduiza & Galais, 2017;Aronow et al., 2019). Another issue for data quality may be the source of the sample. ...
Preprint
Full-text available
Two studies examined the effectiveness of two implicit bias remedies at reducing racial bias in Black and White mock-jurors’ decisions. Participants were recruited through a Qualtrics Panel Project. Study 1 (murder trial; N = 554): Mage = 46.53; 49.1% female; 50% Black; 50.0% White. Study 2 (battery trial; N = 539): Mage = 46.46; 50.5% female; 49.5% Black; 50.5% White. Half of the participants viewed the UBJ video. Then participants read pretrial instructions (general or UBJ), trial summary, posttrial instructions (general or UBJ), and completed measures. Mock-juror race was expected to moderate the effect of defendant race (Black vs. White) on verdicts, sentences, culpability, and credibility, with jurors being more lenient toward same-race defendants. This interaction would be moderated by the unconscious bias juror (UBJ) video and instructions, reducing bias for White jurors only. Mock-jurors’ counterfactual endorsements would mediate race effects on verdicts. In Study 1, juror race moderated the effect of defendant race on verdicts, culpability, and credibility—White, but not Black, jurors demonstrated greater leniency for Black versus White defendants. The UBJ video moderated the effect of defendant race on murder counterfactual endorsement—when the video was present defendant race did not significantly affect endorsement. This endorsement mediated the effect of defendant race on White jurors’ verdicts. In Study 2, juror race influenced verdicts and sentences—White jurors were more lenient regardless of defendant race. The effect of juror race on sentence was qualified by the UBJ video—when present the effect of race was no longer significant. The UBJ remedies increased all mock jurors’ defendant credibility ratings. In conclusion, the debiasing interventions were ineffective in reducing racial bias in jurors’ verdicts. However, they do impact aspects of juror attribution and may be effective with modification.
... Although data quality can be improved through pretesting the instrument and piloting the survey, there are other data quality concerns unique to online research that should also be addressed. Respondents may satisfice, meaning they expend minimal effort on responses, due to online distractions, survey design issues, or desire for incentive payments (Anduiza & Galais, 2016). Survey bots or automatic form fillers may also complete surveys, leading to falsified data. ...
Article
Full-text available
The coronavirus (COVID-19) pandemic is affecting the environment and conservation research in fundamental ways. Many conservation social scientists are now administering survey questionnaires online, but they must do so while ensuring rigor in data collection. Further, they must address a suite of unique challenges, such as the increasing use of mobile devices by participants and avoiding bots or other survey fraud. We reviewed recent literature on online survey methods to examine the state of the field related to online data collection and dissemination. We illustrate the review with examples of key methodological decisions made during a recent national study of people who feed wild birds, in which survey respondents were recruited through an online panel and a sample generated via a project participant list. Conducting surveys online affords new opportunities for participant recruitment, design, and pilot testing. For instance, online survey panels can provide quick access to large and diverse samples of people. Based on the literature review and our own experiences, we suggest that to ensure high-quality online surveys one should account for potential sampling and nonresponse error, design survey instruments for use on multiple devices, test the instrument, and use multiple protocols to identify data quality problems. We also suggest that research funders, journal editors, and policy makers can all play a role in ensuring high-quality survey data are used to inform effective conservation programs and policies.
... Almost all students (96%) failed none or only one attention check. We did not exclude any respondents based on their attention check results [30,31], but rather took these results as evidence that the vast majority of students included in the analyses were sufficiently attentive to the survey. ...
Article
Full-text available
In March 2020, New York City (NYC) experienced an outbreak of coronavirus disease 2019 (COVID-19) which resulted in a 78-day mass confinement of all residents other than essential workers. The aims of the current study were to (1) document the breadth of COVID-19 experiences and their impacts on college students of a minority-serving academic institution in NYC; (2) explore associations between patterns of COVID-19 experiences and psychosocial functioning during the prolonged lockdown, and (3) explore sex and racial/ethnic differences in COVID-19-related experiences and mental health correlates. A total of 909 ethnically and racially diverse students completed an online survey in May 2020. Findings highlight significant impediments to multiple areas of students’ daily life during this period (i.e., home life, work life, social environment, and emotional and physical health) and a vast majority reported heightened symptoms of depression and generalized anxiety. These life disruptions were significantly related to poorer mental health. Moreover, those who reported the loss of a close friend or loved one from COVID-19 (17%) experienced significantly more psychological distress than counterparts with other types of infection-related histories. Nonetheless, the majority (96%) reported at least one positive experience since the pandemic began. Our findings add to a growing understanding of COVID-19 impacts on psychological health and contribute the important perspective of the North American epicenter of the pandemic during the time frame of this investigation. We discuss how the results may inform best practices to support students’ well-being and serve as a benchmark for future studies of US student populations facing COVID-19 and its aftermath.
... Some evidence suggests that excluding participants solely on the basis of a single attention check failure may result in bias (Anduiza & Galais, 2016; Berinsky, Margolis, & Sances, 2014; Hauser, Sunderrajan, Natarajan, & Schwarz, 2016; Miller & Baker-Prewitt, 2009). Hence, subjects who failed an attention check were escalated to a manual review of their data. In review, I examined the length of time a subject spent completing the questionnaire, the pattern of their responses (i.e. for scale items, was the same answer selected for every question?) and whether they failed any other attention checks. ...
Thesis
Research on terrorism is increasingly empirical and a number of significant advancements have been made. One such evolution is the emergent understanding of risk factors and indicators for engagement in violent extremism. Beyond contributing to academic knowledge, this has important real-world implications. Notably, the development of terrorism risk assessment tools, as well as behavioural threat assessment in counterterrorism. This thesis makes a unique contribution to the literature in two key ways. First, there is a general consensus that no single, stable profile of a terrorist exists. Relying on profiles of static risk factors to inform judgements of risk and/or threat may therefore be problematic, particularly given the observed multi- and equi-finality. One way forward may be to identify configurations of risk factors and tie these to the theorised causal mechanisms they speak to. Second, there has been little attempt to measure the prevalence of potential risk factors for violent extremism in a general population, i.e. base rates. Establishing general population base rates will help develop more scientifically rigorous putative risk factors, increase transparency in the provision of evidence, minimise potential bias in decision-making, improve risk communication, and allow for risk assessments based on Bayesian principles. This thesis consists of four empirical chapters. First, I inductively disaggregate dynamic person-exposure patterns (PEPs) of risk factors in 125 cases of lone-actor terrorism. Further analysis articulates four configurations of individual-level susceptibilities which interact differentially with situational, and exposure factors. The PEP typology ties patterns of risk factors to theorised causal mechanisms specified by a previously designed Risk Analysis Framework (RAF). This may be more stable grounds for risk assessment however than relying on the presence or absence of single factors. However, with no knowledge of base rates, the relevance of seemingly pertinent risk factors remains unclear. However, how to develop base rates is of equal concern. Hence, second, I develop the Base Rate Survey and compare two survey questioning designs, direct questioning and the Unmatched Count Technique (UCT). Under the conditions described, direct questioning yields the most appropriate estimates. Third, I compare the base rates generated via direct questioning to those observed across a sample of lone-actor terrorists. Lone-actor terrorists demonstrated more propensity, situational, and exposure risk factors, suggesting these offenders may differ from the general population in measurable ways. Finally, moving beyond examining the prevalence rates of single factors, I collect a second sample in order to model the relations among these risk factors as a complex, dynamic system. To do so, the Base Rate Survey: UK is distributed to a representative sample of 1,500 participants from the UK. I introduce psychometric network modelling to terrorism studies which visualises the interactions among risk factors as a complex system via network graphs.
... First, the reaction of respondents to attention checks is debated, and may introduce undesirable behaviours such as increased drop out (see, for example, Berinsky, Margolis, and Sances, 2016; Anduiza and Galais, 2017;Vannette and Krosnick, 2014). Second, our survey design is relatively simple and does not entail the kind of cognitive involvement required in more elaborate experiments, for instance conjoint designs. ...
Article
Full-text available
Contact tracing applications have been deployed at a fast pace around the world to stop the spread of COVID-19, and they may be a key policy instrument to contain future pandemics. This study aims to explain public opinion toward cell phone contact tracing using a survey experiment conducted with a representative sample of Canadian respondents. We build upon a theory in evolutionary psychology---disease avoidance---to predict how media coverage of the pandemic affects public support for containment measures. We report three key findings. First, exposure to a news item that shows people ignoring social distancing rules causes an increase in support for cell phone contact tracing. Second, pre-treatment covariates such as anxiety and a belief that other people are not following the rules rank among the strongest predictors of support for COVID-19 apps. And third, while a majority of respondents approve the reliance on cell phone contact tracing, many of them hold ambivalent thoughts about the technology. Our analysis of answers to an open-ended question on the topic suggests that concerns for rights and freedoms remain a salient preoccupation.
... instructional manipulation checks, or trap questions) from the sample. However, the use of attention checks is subject to debate, with some researchers pointing out that after such elimination procedures, the remaining cases may be a biased subsample of the total sample, thus biasing the results (Anduiza and Galais, 2016;Bennett, 2001;Berinsky et al., 2016). Experiments show that excluding participants who failed attention checks introduces a demographic bias, and attention checks actually induce low-effort responses or socially desirable responses (Clifford and Jerit, 2015;Vannette, 2016). ...
Article
Full-text available
Currently, there is little agreement as to how Natural Language Generation (NLG) systems should be evaluated, with a particularly high degree of variation in the way that human evaluation is carried out. This paper provides an overview of how (mostly intrinsic) human evaluation is currently conducted and presents a set of best practices, grounded in the literature. These best practices are also linked to the stages that researchers go through when conducting an evaluation research (planning stage; execution and release stage), and the specific steps in these stages. With this paper, we hope to contribute to the quality and consistency of human evaluations in NLG.
... Unfortunately, a large percentage of participants did not pass the quality check questions (IMC). Many factors may be related to the passing rate of IMC, such as the length and the context of the survey (3,62). Future research is needed to understand more about the quality of online survey responses. ...
Article
Full-text available
Consumers do not consider flour, a low-moisture food product, a high risk for microbial contamination. In the past 10 years, however, flour has been identified as a source of pathogenic bacteria, including Salmonella and Escherichia coli. Online surveys were conducted to study consumers' flour handling practices and knowledge about food safety risks related to flour. The survey also evaluated message impact on three food safety messages in communicating information and convincing consumers to adopt safe flour handling practices. Flour-using consumers (n = 1,045) from the United States reported they used flour to make cakes, cookies, and bread. Most consumers stored flour in sealed containers. Less than 1% kept a record of product identification numbers, such as lot numbers, and less than 11% kept brand and use-by date information. Many consumers (85%) were unaware of flour recalls, or outbreaks, and few (17%) believed they would be affected by flour recalls or outbreaks. If the recall affected the flour they bought, nearly half of the consumers (47%) would buy the same product from a different brand for a few months before they returned to the recalled brand. Among consumers who use flour to bake, 66% said they ate raw cookie dough or batter. Raw dough "eaters" were more difficult to convince to avoid eating and playing with raw flour than "noneaters." Food safety messages were less impactful on those raw dough eaters than noneaters. Compared with the food safety message with only recommendations, those messages with recommendations and an explanation as to the benefits of the practice were more effective in convincing consumers to change their practices. These findings provide insight into effective consumer education about safe flour handling practices and could assist in the accurate development of risk assessment models related to flour handling. Highlights:
... Whether researchers attempt to measure it or not, experiments fielded online will likely contain a sizable share of inattentive respondents. Respondents may, for example, be distracted during the experiment (Clifford and Jerit 2014), or simply "satisfice" as a means of completing the survey as quickly as possible to receive payment (Anduiza and Galais 2016;Krosnick, Narayan, and Smith 1996). Such inattentiveness represents a form of experimental noncompliance, which, as Harden, Sokhey and Runge (2019, 201) contend, "poses real threats to securing causal inferences and drawing meaningful substantive conclusions." ...
Preprint
Full-text available
Respondent inattentiveness threatens to undermine experimental studies. In response, researchers incorporate measures of attentiveness into their analyses, yet often in a way that risks introducing post-treatment bias. We propose a design-based technique—mock vignettes (MVs)—to overcome these interrelated challenges. MVs feature content substantively similar to that of experimental vignettes in political science, and are followed by factual questions (mock vignette checks [MVCs]) that gauge respondents’ attentiveness to the MV. Crucially, the same MV is viewed by all respondents prior to the experiment. Across five separate studies, we find that MVC performance is significantly associated with (1) stronger treatment effects, and 2) other common measures of attentiveness. Researchers can therefore use MVC performance to re-estimate treatment effects, allowing for hypothesis tests that are more robust to respondent inattentiveness and yet also safeguarded against post-treatment bias. Lastly, our study offers researchers a set of empirically-validated MVs for their own experiments.
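Re-estimating treatment effects with MVC performance, as proposed above, can be as simple as interacting the pre-treatment MVC pass indicator with the treatment indicator. A hedged sketch on simulated data; variable names and the moderation structure are assumptions for illustration, not the authors' specification.

```python
# Sketch of conditioning a treatment effect on performance on a
# pre-treatment mock vignette check (MVC).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "mvc_pass": rng.binomial(1, 0.7, n),   # measured before treatment assignment
})
# Attentive (mvc_pass == 1) respondents receive the full treatment effect.
df["outcome"] = (
    0.5 * df["treated"] * df["mvc_pass"]
    + 0.1 * df["treated"] * (1 - df["mvc_pass"])
    + rng.normal(0, 1, n)
)

fit = smf.ols("outcome ~ treated * mvc_pass", data=df).fit()
print(fit.params)   # the treated:mvc_pass term captures the attentiveness gap
```

Because the MVC is answered before treatment, conditioning on it avoids the post-treatment bias that conditioning on a post-treatment attention check would introduce.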
... Although data quality can be improved through pretesting the instrument and piloting the survey, there are other data quality concerns unique to online research that should also be addressed. Respondents may satisfice, meaning they expend minimal effort on responses, due to online distractions, survey design issues, or desire for incentive payments (Anduiza & Galais, 2016). Survey bots or automatic form fillers may also complete surveys, leading to falsified data. ...
Preprint
The coronavirus (COVID-19) pandemic is impacting the environment and conservation research in fundamental ways. For conservation social scientists, the pandemic has necessitated swift changes to research methods, including shifting away from in-person data collection. Social survey data are key to integrating perspectives and knowledge from a variety of social actors in order to more effectively manage and conserve ecosystems. In-person survey methods have long been considered an indispensable approach for reaching certain populations (e.g., low-income), those without an available sampling frame (e.g., birders), or those defined by place (e.g., park visitors). However, these methods became infeasible for many researchers during the pandemic, as they may during other times of social upheaval. Additionally, response rates across multiple survey modes have been steadily declining over decades, requiring that researchers consider new approaches. Conservation social scientists are now turning to online surveys at a rapid rate, but they must do so while ensuring rigor in this data collection mode. Further, they must address a suite of unique challenges, such as the increasing use of mobile devices by participants and avoiding bots or other survey fraud. This paper charts a course for high-quality online survey research for conservation social scientists through review of recent literature and our own experiences as survey researchers. We provide direction for scientists moving their surveys online, with examples from a recent national study of people who feed wild birds, in which an online survey was implemented through a survey panel and a sample generated via a project participant list. We also make recommendations for research funders, journal editors, and policymakers using survey-based science, who can all play a role in assuring that high-quality survey data are used to inform effective conservation programs and policies.
... Gao et al. (2016b) use a single trap question to identify inattentive respondents and find that younger and low income respondents are more likely to fail the trap question, and suggest over-sampling this group to increase overall data quality. This finding is in line with Anduiza and Galais (2016) who find that respondents who pass and fail IMCs are significantly different on key demographic characteristics, and that respondents who fail are often younger, less educated and less motivated by the topic of the survey, which exacerbates their concern that eliminating these respondents could increase sample bias. Running separate models on respondents who fail and pass the trap question, Gao et al. (2016b) find that the model for the group that passed fit the data better and that the estimated willingness-to-pay had smaller variation. ...
Article
Full-text available
Stated preference practitioners are increasingly relying on internet panels to gather data, but emerging evidence suggests potential limitations with respect to respondent and response quality in such panels. We identify groups of inattentive respondents who have failed to watch information videos provided in the survey to completion. Our results show that inattentive respondents have a higher cost sensitivity, are more likely to have a small scale parameter, and are more likely to ignore the non-cost attributes. These results are largely driven by respondents failing to watch the information video about the discrete choice experiment, attributes and levels, which underlines the importance of information provision and highlights possible implications of inattentiveness. We develop a modeling framework to simultaneously address preference, scale and attribute processing heterogeneity. We find that when we account for attribute non-attendance, scale differences disappear, which suggests that the type of heterogeneity detected in a model could be the result of un-modeled heterogeneity of a different kind. We discuss implications of our results.
... To bring the text closer to the participant, thus potentially augmenting the effect of the manipulation, the story took place in the state they were from, and in particular the second largest city in that state. To screen for satisficers, two attention checks were included which asked where the protagonist in the vignette had been spending his time before his encounter with the police, and what the police officers were looking for. In addition, an instructional manipulation check (Anduiza and Galais 2016; Oppenheimer, Meyvis, and Davidenko 2009) was also included, which requested the participants not to select a response to a particular item. Failure of either of these checks meant the end of the respondent's participation. ...
Thesis
This thesis makes a theoretical and a methodological contribution. Theoretically, it tests certain predictions of procedural justice policing, which posits that neutral, fair, and respectful treatment by the police is the cornerstone of fruitful police-public relations, in that procedural justice leads to increased police legitimacy, and that legitimacy engenders societally desirable outcomes, such as citizens’ willingness to cooperate with the police and compliance with the law. Methodologically, it identifies and assesses causal mechanisms using a family of methods developed mostly in the field of epidemiology: causal mediation analysis. The theoretical and methodological aspects of this thesis converge in the investigation of (1) the extent to which procedural justice mediates the impact of contact with the police on police legitimacy and psychological processes (Paper 1), (2) the mediating role of police legitimacy on willingness to cooperate with the police and compliance with the law (Paper 3, Paper 4), and (3) the psychological drivers that channel the impact of procedural justice on police and legal legitimacy (Paper 2). This thesis makes use of a randomised controlled trial (Scottish Community Engagement Trial), four randomised experiments, and one experiment with parallel (encouragement) design on crowdsourced samples from the US and the UK (recruited through Amazon Turk and Prolific Academic). The causal evidence attests to the centrality of procedural justice, which mediates the impact of an encounter with the police on police legitimacy, and influences psychological processes and police legitimacy. Personal sense of power, not social identity, is the causal mediator of the effect of procedural justice on police and legal legitimacy. Finally, different aspects of legitimacy transmit the influence of procedural justice on distinct outcomes, with duty to obey affecting legal compliance and normative alignment affecting willingness to cooperate. In sum, most of the causal evidence is congruent with the theory of procedural justice.
... Thus, though they may measure attentiveness to the survey in general, IMCs do not measure attentiveness to stimuli contained within the experimental portion of the survey. This is especially problematic insofar as respondents' levels of attentiveness vary throughout the course of a study (Anduiza and Galais 2016) or in surveys with many distractions (Clifford and Jerit 2014); thus, attentiveness to any given IMC may be a poor indicator of attentiveness to the experimental manipulations. Second, individuals who regularly complete surveys online (e.g., MTurk "Workers") may become exceptionally "savvy" at spotting IMCs (see Krupnikov and Levine 2014; Hauser and Schwarz 2016) and/or share such information with other users (Chandler, Mueller, and Paolacci 2014). ...
Article
Full-text available
Manipulation checks are often advisable in experimental studies, yet they rarely appear in practice. This lack of usage may stem from fears of distorting treatment effects and uncertainty regarding which type to use (e.g., instructional manipulation checks [IMCs] or assessments of whether stimuli alter a latent independent variable of interest). Here, we first categorize the main variants and argue that factual manipulation checks (FMCs)—that is, objective questions about key elements of the experiment—can identify individual‐level attentiveness to experimental information and, as a consequence, better enable researchers to diagnose experimental findings. We then find, through four replication studies, little evidence that FMC placement affects treatment effects, and that placing FMCs immediately post‐outcome does not attenuate FMC passage rates. Additionally, FMC and IMC passage rates are only weakly related, suggesting that each technique identifies different sets of attentive subjects. Thus, unlike other methods, FMCs can confirm attentiveness to experimental protocols.
... Some researchers have argued that the impact of measurement error caused by inattentive responding is negligible due to low prevalence (e.g. Anduiza & Galais, 2016; Berinsky, Margolis, & Sances, 2014). Others see inattentive responding as a major problem that affects the quality of survey results (e.g. ...
Article
This study aims to assess whether respondent inattentiveness causes systematic and unsystematic measurement error that influences survey data quality. To determine the impact of (in)attentiveness on the reliability and validity of target measures, we compared respondents from a German online survey (N = 5205) who had passed two attention checks with those who had failed. Our results show that inattentiveness induces both random and systematic measurement error, which impacts estimates of the reliability and validity of multi-item scales. In addition, we conducted a sensitivity analysis, which revealed that the impact of inattentiveness on analyses can be substantial.
... More recently, Anduiza and Galais (2017) found that failing a trap question in an early wave of a longitudinal study was positively correlated with failing again in subsequent waves. Also, respondents who were more interested in the survey topic were less likely to fail the trap questions. ...
Article
This study examines the use of trap questions as indicators of data quality in online surveys. Trap questions are intended to identify respondents who are not paying close attention to survey questions, which would mean that they are providing sub-optimal responses not only to the trap question itself but to other questions included in the survey. We conducted three experiments using an online non-probability panel. In the first experiment, we examine whether there is any difference in responses between surveys with one trap question and those with two trap questions. In the second study, we examine responses to surveys with trap questions of varying difficulty. In the third experiment, we test the level of difficulty, the placement of the trap question, and other forms of attention checks. In all studies, we correlate the responses to the trap question(s) with other data quality checks, most of which were derived from the literature on satisficing. We also compare responses to several substantive questions by the response to the trap questions, which tells us whether participants who failed the trap questions gave consistently different answers from those who passed. We find that the rate of passing/failing various trap questions varies widely, from 27% to 87% among the types we tested. We also find evidence that some types of trap questions are more significantly correlated with other data quality measures.
... Past research has drawn on "instructional manipulation check" and "bogus item" methods to make exploratory estimates of the proportion of inattentive survey and/or experiment respondents. These exploratory studies put the proportion of inattentive respondents at between 3.3% and 59.1% (Anduiza and Galais, 2017), between 8% and 11% (Hauser and Schwarz, 2015), 63% (Berinsky, Margolis and Sances, 2014), 46% (Oppenheimer, Davis and Davidenko, 2009), and 11% (Meade and Craig, 2012). These exploratory findings show that the quality of data collected through surveys is threatened by the presence of inattentive respondents. ...
... Broadly speaking, careless responding is answering survey items without reading them (Anduiza & Galais, 2017). Not reading items, however, is not the only condition for careless responding. ...
Chapter
The current paper introduces a novel method for detecting careless respondents: floodlight detection of careless respondents. The method is composed of two steps: (1) estimating a nonsense regression model and (2) testing the moderating role of response time in that model with the Johnson-Neyman technique.
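A hedged sketch of the general idea in Python, assuming the "nonsense" model regresses one theoretically unrelated item on another and the Johnson-Neyman procedure then locates the response times at which that nonsense slope turns significant. Variable names and simulated data are illustrative, not the chapter's implementation.

```python
# Hedged sketch of floodlight (Johnson-Neyman) detection of careless responding.
# Assumption: two theoretically unrelated items should show no association among
# attentive respondents; response time is tested as a moderator of that "nonsense" slope.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 800
resp_time = rng.uniform(30, 600, n)                     # seconds spent on the survey
careless = (resp_time < 90).astype(float)               # very fast responders answer carelessly
item_a = rng.normal(0, 1, n)
item_b = 0.8 * careless * item_a + rng.normal(0, 1, n)  # spurious link only among the careless

df = pd.DataFrame({"item_b": item_b, "item_a": item_a, "time": resp_time})
fit = smf.ols("item_b ~ item_a * time", data=df).fit()

# Conditional slope of item_a at a grid of response times, with its standard error.
b1, b3 = fit.params["item_a"], fit.params["item_a:time"]
cov = fit.cov_params()
grid = np.linspace(df["time"].min(), df["time"].max(), 200)
slope = b1 + b3 * grid
se = np.sqrt(cov.loc["item_a", "item_a"]
             + grid**2 * cov.loc["item_a:time", "item_a:time"]
             + 2 * grid * cov.loc["item_a", "item_a:time"])
t_crit = stats.t.ppf(0.975, fit.df_resid)

# Johnson-Neyman region: response times at which the nonsense slope is significant.
flagged = grid[np.abs(slope / se) > t_crit]
if flagged.size:
    print(f"Nonsense slope significant for response times in "
          f"[{flagged.min():.0f}, {flagged.max():.0f}] seconds")
```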
Article
Full-text available
Participants that complete online surveys and experiments may be inattentive, which can hinder researchers’ ability to draw substantive or causal inferences. As such, many practitioners include multiple factual or instructional closed-ended manipulation checks to identify low-attention respondents. However, closed-ended manipulation checks are either correct or incorrect, which makes it easier for participants to guess and reduces the potential variation in attention between respondents. In response to these shortcomings, I develop an automatic and standardized methodology to measure attention that relies on the text that respondents provide in an open-ended manipulation check. There are multiple benefits to this approach. First, it provides a continuous measure of attention, which allows for greater variation between respondents. Second, it reduces the reliance on subjective, paid humans to analyze open-ended responses. Last, I outline how to diagnose the impact of inattentive workers on the overall results, including how to assess the average treatment effect for those respondents who likely received the treatment. I provide easy-to-use software in R to implement these suggestions for open-ended manipulation checks.
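The article's software is in R and is not reproduced here. The sketch below only illustrates the broad intuition of a continuous, automatic attention score, computed here as lexical overlap between a hypothetical open-ended answer and the treatment text; the actual methodology is more sophisticated, and all names and texts are invented for illustration.

```python
# Rough, hypothetical sketch of a continuous attention score from an open-ended
# manipulation check: lexical overlap between the respondent's recollection of the
# treatment and the treatment text itself. This is NOT the article's R implementation.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, dropping very short function words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def attention_score(open_ended_answer: str, treatment_text: str) -> float:
    """Share of distinctive treatment words echoed in the open-ended answer (0-1)."""
    treat, answer = tokens(treatment_text), tokens(open_ended_answer)
    return len(treat & answer) / len(treat) if treat else 0.0

treatment = ("The city council voted to raise the local sales tax "
             "to fund repairs of public schools.")
answers = [
    "Something about the council raising taxes to repair schools.",  # more attentive
    "I don't remember, it was about politics I think.",              # inattentive
]
for a in answers:
    print(round(attention_score(a, treatment), 2), "-", a)
```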
Article
Guns are highly visible in the news, in politics, and in American culture more broadly. While most Americans support some gun control, a significant and vocal minority of Americans are firmly opposed. Drawing on work from the recently developing sociology of modern gun culture, we propose an intersectional threat model—wherein perceived threats to multiple privileged identities provoke a distinct response—for understanding the positions Americans take on gun policies. Using data from a 2018 national survey conducted by the American National Election Survey, we find a robust role for perceived threats along gender, race, and citizenship lines in opposition to background checks for private sales and an assault weapons ban as well as support for arming teachers. Interactions reveal multiplicative effects: that gender threats matter more when racial and immigrant threats are also felt. We discuss implications for the prospect of policy and for understanding the pro-gun alt-right movement and other potential applications of intersectional threat.
Article
Internet-based surveys have expanded public opinion data collection at the expense of monitoring respondent attentiveness, potentially compromising data quality. Researchers now have to evaluate attentiveness ex post. We propose a new proxy for attentiveness, response-time attentiveness clustering (RTAC), that uses dimension reduction and an unsupervised clustering algorithm to leverage variation in response time between respondents and across questions. We advance the literature theoretically, arguing that the existing dichotomous classification of respondents as fast or attentive is insufficient and neglects slow and inattentive respondents. We validate our theoretical classification and empirical strategy against commonly used proxies for survey attentiveness. In contrast to other methods for capturing attentiveness, RTAC allows researchers to collect attentiveness data unobtrusively without sacrificing space on the survey instrument.
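A rough approximation of such a pipeline, assuming simulated per-question response times; the choice of three components and four clusters is purely illustrative, and this is not the authors' implementation.

```python
# Hedged approximation of a response-time attentiveness clustering (RTAC) pipeline:
# standardize per-question response times, reduce dimensionality, cluster respondents.
# Simulated data; not the authors' implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_resp, n_items = 600, 20
# Rows: respondents; columns: seconds spent on each question.
times = rng.lognormal(mean=2.5, sigma=0.5, size=(n_resp, n_items))
times[:100] *= 0.3     # a block of very fast (likely inattentive) respondents
times[100:150] *= 3.0  # a block of very slow respondents (distracted or very careful)

# Standardize within each question so fast/slow is measured relative to other respondents.
z = StandardScaler().fit_transform(np.log(times))
components = PCA(n_components=3, random_state=0).fit_transform(z)

# Cluster respondents into speed/attention profiles (k chosen here only for illustration).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)
for k in range(4):
    print(f"cluster {k}: n={np.sum(labels == k):3d}, "
          f"median total time={np.median(times[labels == k].sum(axis=1)):.0f}s")
```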
Article
The presence of satisficers among survey respondents threatens survey data quality. To identify such respondents, Oppenheimer et al. developed the Instructional Manipulation Check (IMC), which has been used as a tool to exclude observations from the analyses. However, this practice has raised concerns regarding its effects on the external validity and the substantive conclusions of studies excluding respondents who fail an IMC. Thus, more research on the differences between respondents who pass versus fail an IMC regarding sociodemographic and attitudinal variables is needed. This study compares respondents who passed versus failed an IMC both for descriptive and causal analyses based on structural equation modeling (SEM) using data from an online survey implemented in Spain in 2019. These data were analyzed by Rubio Juan and Revilla without taking into account the results of the IMC. We find that those who passed the IMC do differ significantly from those who failed for two sociodemographic and five attitudinal variables, out of 18 variables compared. Moreover, in terms of substantive conclusions, differences between those who passed and failed the IMC vary depending on the specific variables under study.
Article
Full-text available
Abstract: Images are gaining importance in organizations' online communication, but they are perceived within their context. An online experiment therefore tested the extent to which thematically congruent multimodal organizational communication increases perceived credibility and reduces perceived deception, and whether it thereby influences the intention to engage in civil society. In a 2 (government organization vs. civil society organization) × 3 (congruent vs. incongruent vs. text-based) between-subjects experiment with participants of the SoSci-Panel (N = 406), the web pages of the Bertelsmann Stiftung and the Bundesinnenministerium (Federal Ministry of the Interior) on migration and integration were manipulated, while accounting for the influence of perceived threat from migration. The results show that congruent text-image communication has a positive effect on behavioral intentions because it is perceived as credible. When the perceived threat from immigration is high, the positive perception effect is weaker. People are more inclined to act when the communication comes from a civil society organization. This effect is linked to the different raison d'être of governmental and civil society organizations. The results underscore the importance of credible communication for a functioning democracy.
Article
Does attentiveness matter in survey responses? Do more attentive survey participants give higher quality responses? Using data from a recent online survey that identified inattentive respondents using instructed-response items, we demonstrate that ignoring attentiveness provides a biased portrait of the distribution of critical political attitudes and behavior. We show that this bias occurs in the context of both typical closed-ended questions and in list experiments. Inattentive respondents are common and are more prevalent among the young and less educated. Those who do not pass the trap questions interact with the survey instrument in distinctive ways: they take less time to respond; are more likely to report nonattitudes; and display lower consistency in their reported choices. Inattentiveness does not occur completely at random and failing to properly account for it may lead to inaccurate estimates of the prevalence of key political attitudes and behaviors, of both sensitive and more prosaic nature.
Article
Identifying inattentive respondents in self-administered surveys is a challenging goal for survey researchers. Instructed response items (IRIs) provide a measure for inattentiveness in grid questions that is easy to implement. The present article adds to the sparse research on the use and implementation of attention checks by addressing three research objectives. In a first study, we provide evidence that IRIs identify respondents who show an elevated use of straightlining, speeding, item nonresponse, inconsistent answers, and implausible statements throughout a survey. Excluding inattentive respondents, however, did not alter the results of substantive analyses. Our second study suggests that respondents’ inattentiveness partially changes as the context in which they complete the survey changes. In a third study, we present experimental evidence that a mere exposure to an IRI does not negatively or positively affect response behavior within a survey. A critical discussion on using IRI attention checks concludes this article.
Article
What are the determinants of public opinion on foreign policy measures against autocracies? Does public support differ depending on the type of autocracy? In this paper, we design an original survey experiment in order to shed light on these questions. Specifically, we examine whether public support for a military intervention, economic sanctions, and diplomatic pressure is influenced by the oil wealth of the autocracy. Findings show that US citizens in the experiments are systematically more supportive of imposing sanctions and diplomatic pressure on an oil-exporting autocracy than on a non-oil-exporting one. However, when exposed to the potential energy costs of these measures, respondents are deterred from approving of them.
Article
Full-text available
Panel data offer a unique opportunity to identify records that interviewers clearly faked by comparing data across waves. In the German Socio-Economic Panel (SOEP), only 0.5% of all records of raw data have been detected as faked. These fakes are used here to analyze the potential impact of faking on survey results. Our central finding is that the faked records have no impact on the means or the proportions. However, we show that there may be a serious bias in the estimation of correlations and regression coefficients. In all but one year (1998), the detected faked data were never disseminated within the widely used SOEP study. The fakes are removed prior to data release.
Article
Full-text available
Increasingly, colleges and universities use survey results to make decisions, inform research, and shape public opinion. Given the large number of surveys distributed on campuses, can researchers reasonably expect that busy respondents will diligently answer each and every question? Less serious respondents may "satisfice," i.e., take shortcuts to conserve effort, in a number of ways: choosing the same response every time, skipping items, rushing through the instrument, or quitting early. In this paper we apply this satisficing framework to demonstrate analytic options for assessing respondents' conscientiousness in giving high-fidelity survey answers. Specifically, we operationalize satisficing as a series of measurable behaviors and compute a satisficing index for each survey respondent. Using data from two surveys administered in university contexts, we find that the majority of respondents engaged in satisficing behaviors, that single-item results can be significantly impacted by satisficing, and that scale reliabilities and correlations can be altered by satisficing behaviors. We conclude with a discussion of the importance of identifying satisficers in routine survey analysis in order to verify data quality prior to using results for decision-making, research, or public dissemination of findings.
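As a hedged illustration of how such a respondent-level index might be computed, the sketch below sums hypothetical flags for straightlining, speeding, item skipping, and quitting early; the thresholds are placeholders, not the paper's coding rules.

```python
# Hedged sketch of a respondent-level satisficing index: flag several observable
# shortcut behaviors and sum them. Thresholds and flags are hypothetical,
# not the paper's exact operationalization.
import numpy as np
import pandas as pd

def satisficing_index(grid: pd.DataFrame, duration_sec: pd.Series,
                      min_seconds: float = 120.0) -> pd.DataFrame:
    """grid: one row per respondent, one column per grid item (NaN = skipped)."""
    flags = pd.DataFrame(index=grid.index)
    flags["straightline"] = (grid.nunique(axis=1) <= 1).astype(int)   # same answer everywhere
    flags["speeding"] = (duration_sec < min_seconds).astype(int)      # implausibly fast
    flags["skipping"] = (grid.isna().mean(axis=1) > 0.2).astype(int)  # >20% of items missing
    flags["quit_early"] = grid.iloc[:, -3:].isna().all(axis=1).astype(int)  # last items blank
    flags["index"] = flags.sum(axis=1)
    return flags

# Tiny illustrative example: three respondents, five grid items.
grid = pd.DataFrame([[3, 3, 3, 3, 3],
                     [1, 4, 2, 5, 3],
                     [2, 4, np.nan, np.nan, np.nan]], columns=list("abcde"))
duration = pd.Series([60, 480, 300])
print(satisficing_index(grid, duration))
```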
Article
Full-text available
We begin this article with the assumption that attitudes are best understood as structures in long-term memory, and we look at the implications of this view for the response process in attitude surveys. More specifically, we assert that an answer to an attitude question is the product of a four-stage process. Respondents first interpret the attitude question, determining what attitude the question is about. They then retrieve relevant beliefs and feelings. Next, they apply these beliefs and feelings in rendering the appropriate judgment. Finally, they use this judgment to select a response. All four of the component processes can be affected by prior items. The prior items can provide a framework for interpreting later questions and can also make some responses appear to be redundant with earlier answers. The prior items can prime some beliefs, making them more accessible to the retrieval process. The prior items can suggest a norm or standard of comparison for making the judgment. Finally, the prior items can create consistency pressures or pressures to appear moderate. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
A response tendency resulting from the length of a group-administered questionnaire instrument is described. Respondents answering items that are included in large sets toward the later parts of a long questionnaire are more likely to give identical answers to most or all of the items, compared with those responding to items in smaller sets or in shorter questionnaires. While means and intercorrelations among items within the same set are affected by this “straight-line” response pattern, intercorrelations between items from different sets are much less affected by it. These investigations are based on comparisons between a long questionnaire, administered to 1,050 high school seniors in nine high schools across the nation in 1978, and five shorter questionnaires administered to large national samples of high school seniors.
Article
Full-text available
Survey researchers since Cannell have worried that respondents may take various shortcuts to reduce the effort needed to complete a survey. The evidence for such shortcuts is often indirect. For instance, preferences for earlier versus later response options have been interpreted as evidence that respondents do not read beyond the first few options. This is really only a hypothesis, however, that is not supported by direct evidence regarding the allocation of respondent attention. In the current study, we used a new method to more directly observe what respondents do and do not look at by recording their eye movements while they answered questions in a Web survey. The eye-tracking data indicate that respondents do in fact spend more time looking at the first few options in a list of response options than those at the end of the list; this helps explain their tendency to select the options presented first regardless of their content. In addition, the eye-tracking data reveal that respondents are reluctant to invest effort in reading definitions of survey concepts that are only a mouse click away or paying attention to initially hidden response options. It is clear from the eye-tracking data that some respondents are more prone to these and other cognitive shortcuts than others, providing relatively direct evidence for what had been suspected based on more conventional measures.
Article
Full-text available
Previous research has documented effects of the order in which response choices are offered to respondents using closed-ended survey items, but no theory of the psychological sources of these effects has yet been proposed. This paper offers such a theory drawn from a variety of psychological research. Using data from a split-ballot experiment in the 1984 General Social Survey involving a variant of Kohn's parental values measure, we test some predictions made by the theory about what kind of response order effect would be expected (a primacy effect) and among which respondents it should be strongest (those low in cognitive sophistication). These predictions are confirmed. We also test the “form-resistant correlation” hypothesis. Although correlations between items are altered by changes in response order, the presence and nature of the latent value dimension underlying these responses are essentially unaffected.
Article
Full-text available
This study investigates the effect of item and person characteristics on item nonresponse, for written questionnaires used with school children. Secondary analyses were done on questionnaire data collected in five distinct studies. To analyze the data, logistic multilevel analysis was used with the items at the lowest and the children at the highest level. Item nonresponse turns out to be relatively rare. Item nonresponse can be predicted by some of the item and person characteristics in our study. However, the predicted response differences are small. There are interactions between item and person characteristics, especially with the number of years of education, which is used as a proxy indicator for cognitive skill. Young children do not perform as well as children with more years of education, by producing more item nonresponse, but their performance is still acceptable.
Article
It is well-documented that there exists a pool of frequent survey takers who participate in many different online nonprobability panels in order to earn cash or other incentives--so-called 'professional' respondents. Despite widespread concern about the impact of these professional respondents on data quality, there is not a clear understanding of how they might differ from other respondents. This chapter reviews the previous research and expectations regarding professional respondents and then examines how frequent survey taking and multiple panel participation affects data quality in the 2010 Cooperative Congressional Election Study. In contrast to common assumptions, we do not find overwhelming and consistent evidence that frequent survey takers are more likely to satisfice. On the contrary, frequent survey takers spent more time completing the questionnaire, were less likely to attrite, were less likely to straightline, and reported putting more effort into answering the survey. While panel memberships and number of surveys completed were related to skipping questions, answering "don't know," or giving junk responses to open-ended questions, these relationships did not hold once we account for levels of political knowledge. However, our analysis finds that higher levels of participation in surveys and online panels are associated with lower levels of political knowledge, interest, engagement, and ideological extremism. These findings suggest there could be contrasting motivations for those volunteering to participate in nonprobability panel surveys, with professional respondents taking part for the incentives and nonprofessional respondents taking part based on interest in the survey topic. As such, eliminating professional respondents from survey estimates, as some have recommended, would actually result in a more biased estimate of political outcomes.
Book
Examines the psychological processes involved in answering different types of survey questions. The book proposes a theory about how respondents answer questions in surveys, reviews the relevant psychological and survey literatures, and traces out the implications of the theories and findings for survey practice. Individual chapters cover the comprehension of questions, recall of autobiographical memories, event dating, questions about behavioral frequency, retrieval and judgment for attitude questions, the translation of judgments into responses, special processes relevant to the questions about sensitive topics, and models of data collection. The text is intended for: (1) social psychologists, political scientists, and others who study public opinion or who use data from public opinion surveys; (2) cognitive psychologists and other researchers who are interested in everyday memory and judgment processes; and (3) survey researchers, methodologists, and statisticians who are involved in designing and carrying out surveys. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
In a national field experiment, the same questionnaires were administered simultaneously by RDD telephone interviewing, by the Internet with a probability sample, and by the Internet with a nonprobability sample of people who volunteered to do surveys for money. The probability samples were more representative of the nation than the nonprobability sample in terms of demographics and electoral participation, even after weighting. The nonprobability sample was biased toward being highly engaged in and knowledgeable about the survey's topic (politics). The telephone data manifested more random measurement error, more survey satisficing, and more social desirability response bias than did the Internet data, and the probability Internet sample manifested more random error and satisficing than did the volunteer Internet sample. Practice at completing surveys increased reporting accuracy among the probability Internet sample, and deciding only to do surveys on topics of personal interest enhanced reporting accuracy in the nonprobability Internet sample. Thus, the nonprobability Internet method yielded the most accurate self-reports from the most biased sample, while the probability Internet sample manifested the optimal combination of sample composition accuracy and self-report accuracy. These results suggest that Internet data collection from a probability sample yields more accurate results than do telephone interviewing and Internet data collection from nonprobability samples.
Article
Investigation of the underlying mechanisms responsible for measurement variance has received little attention. The primary objective of this study is to examine whether paper and social media surveys produce convergent results and investigate the underlying psychological mechanisms for the potential measurement nonequivalence. Particularly, we explored the role of social desirability and satisficing on the measurement results. We collected data via five different survey modes, including paper survey, ad hoc Web survey, online forum (message boards)-based, SNS-based and microblog-based surveys. The findings show that socially desirable responding does not lead to inconsistent results. Rather we found that satisficing causes inconsistent results in paper versus online surveys. Sociability reduces the possibility of engaging in satisficing that results in inconsistent results between traditional Web surveys and social media-based Web surveys.
Book
1. Introduction 2. Respondents' understanding of survey questions 3. The role of memory in survey responding 4. Answering questions about date and durations 5. Attitude questions 6. Factual judgments and numerical estimates 7. Attitude judgments and context effects 8. Mapping and formatting 9. Survey reporting of sensitive topics 10. Mode of data collection 11. Impact of the application of cognitive models to survey measurement.
Article
Good survey and experimental research requires subjects to pay attention to questions and treatments, but many subjects do not. In this article, we discuss “Screeners” as a potential solution to this problem. We first demonstrate Screeners’ power to reveal inattentive respondents and reduce noise. We then examine important but understudied questions about Screeners. We show that using a single Screener is not the most effective way to improve data quality. Instead, we recommend using multiple items to measure attention. We also show that Screener passage correlates with politically relevant characteristics, which limits the generalizability of studies that exclude failers. We conclude that attention is best measured using multiple Screener questions and that studies using Screeners can balance the goals of internal and external validity by presenting results conditional on different levels of attention.
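A minimal sketch of the recommended reporting style, assuming simulated data and three hypothetical screeners: respondents are scored by how many screeners they passed, and the estimate of interest is reported at each attention level rather than after simply dropping those who fail.

```python
# Hedged sketch: combine several screeners into an attention scale and report a
# survey estimate at each level of attention rather than excluding failers outright.
# Simulated data and variable names are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 2000
passes = rng.binomial(1, 0.7, size=(n, 3)).sum(axis=1)  # 0-3 screeners passed
# Simulate an attitude item whose distribution differs by attentiveness.
attitude = rng.normal(loc=3.0 + 0.2 * passes, scale=1.0, size=n)

df = pd.DataFrame({"screeners_passed": passes, "attitude": attitude})
summary = (df.groupby("screeners_passed")["attitude"]
             .agg(n="size", mean="mean",
                  se=lambda x: x.std(ddof=1) / np.sqrt(len(x))))
print(summary.round(3))
```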
Article
This article is intended to supplement rather than replace earlier reviews of research on survey incentives, especially those by Singer (2002); Singer and Kulka (2002); and Cantor, O’Hare, and O’Connor (2008). It is based on a systematic review of articles appearing since 2002 in major journals, supplemented by searches of the Proceedings of the American Statistical Association’s Section on Survey Methodology for unpublished papers. The article begins by drawing on responses to open-ended questions about why people are willing to participate in a hypothetical survey. It then lays out the theoretical justification for using monetary incentives and the conditions under which they are hypothesized to be particularly effective. Finally, it summarizes research on how incentives affect response rates in cross-sectional and longitudinal studies and, to the extent information is available, how they affect response quality, nonresponse error, and cost-effectiveness. A special section on incentives in Web surveys is included.
Article
This paper explores some of the issues surrounding the use of internet-based methodologies, in particular the extent to which data from an online survey can be matched to data from a face-to-face survey. Some hypotheses about what causes differences in data from online panel surveys and nationally representative face-to-face surveys are discussed. These include: interviewer effect and social desirability bias in face-to-face methodologies; the mode effects of online and face-to-face survey methodologies, including how response scales are used; and differences in the profile of online panellists - both demographic and attitudinal. Parallel surveys were conducted using online panel and face-to-face (CAPI) methodologies, and data were compared before weighting, following demographic weighting and following 'propensity score weighting' - a technique developed by Harris Interactive to correct for attitudinal differences typically found in online respondents. This paper looks at the differences in data from online and face-to-face surveys and puts forward some theories about why these differences might exist. The varying degrees of success of the weighting are also examined.
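A rough sketch of the general logic of attitudinal propensity-score weighting, with hypothetical covariates and simulated samples; this is not Harris Interactive's proprietary procedure, only an illustration of the idea of reweighting an online panel toward a face-to-face reference sample.

```python
# Hedged sketch of propensity-score weighting of an online panel toward a
# face-to-face reference survey using attitudinal covariates. Simplified logic only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_f2f, n_web = 1500, 1500
# Hypothetical attitudinal covariates (e.g., political interest, risk aversion).
f2f = pd.DataFrame({"interest": rng.normal(0.0, 1, n_f2f),
                    "risk": rng.normal(0.0, 1, n_f2f), "mode": 0})
web = pd.DataFrame({"interest": rng.normal(0.6, 1, n_web),  # panel skews more interested
                    "risk": rng.normal(-0.3, 1, n_web), "mode": 1})
both = pd.concat([f2f, web], ignore_index=True)

# Propensity of being in the face-to-face (reference) sample given attitudes.
X, y = both[["interest", "risk"]], 1 - both["mode"]
p_f2f = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Weight web respondents by the odds of face-to-face membership, then normalize.
web_weights = p_f2f[both["mode"] == 1] / (1 - p_f2f[both["mode"] == 1])
web_weights *= len(web_weights) / web_weights.sum()
print("unweighted web mean interest:", round(web["interest"].mean(), 2))
print("weighted web mean interest:  ",
      round(np.average(web["interest"], weights=web_weights), 2))
```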
Article
A meta-analysis of split-ballots conducted by the Gallup Organization in the 1930s, 1940s, and early 1950s shows that response-order effects were generally small in magnitude when averaged across a great variety of topics and questions—and as compared with many of those reported in the response-effects literature today. When analyzed by various question characteristics, the results provided some support for predictions derived from current cognitive models of response-order effects, particularly those from satisficing theory. As predicted, questions asked orally were more likely to generate a statistically significant pattern of recency effects if the response alternatives or the questions as a whole were longer rather than shorter. Other predicted patterns of primacy and recency effects failed to materialize, however, perhaps largely because of the inherent design limitations and partial confounding of question attributes in any such secondary analysis of archival survey data, but perhaps, too, because of simple chance variations. The data from these early experiments nonetheless provide a partial, albeit limited, test of rival hypotheses and explanations of response-order effects in the literature.
Article
Behavior coding is one technique researchers use to detect problems in survey questions, but it has been primarily explored as a practical tool rather than a source of insight into the theoretical understanding of the cognitive processes by which respondents answer survey questions. The latter is the focus of the current investigation. Using data from a large study in which face-to-face interviews were taped and extensive behavior coding was done, we tested whether sets of respondent behavior codes could be used to distinguish respondent difficulties with comprehension of the question from difficulties associated with mapping a judgment onto the response format provided, and whether characteristics of the survey questions and respondents could be used to predict when and for whom such difficulties would occur. Sets of behavior codes were identified that reflected comprehension and mapping difficulties, and these two types of difficulties were associated with different question and respondent characteristics. This evidence suggests that behavior coding shows promise as a tool for researchers studying the cognitive processes involved in answering survey questions.
Article
A new theoretical perspective proposes that various survey response patterns occur partly because respondents shortcut the cognitive processes necessary for generating optimal answers and that these shortcuts are directed by cues in the questions.
Article
For over 35 years, a random sample of U.S. women has responded for free to a government survey that tracks their socioeconomic development. In 2003 an experiment was run to understand if providing monetary incentives of up to $40 would impact participation rates. Providing incentives to respondents, who previously refused to participate in the last survey round, significantly boosted response rates, and resulted in longer interviews and more items answered. However, providing monetary incentives to previously willing respondents showed a mixed impact on response rates, interview times, and items answered.
Article
This paper proposes that when optimally answering a survey question would require substantial cognitive effort, some respondents simply provide a satisfactory answer instead. This behaviour, called satisficing, can take the form of either (1) incomplete or biased information retrieval and/or information integration, or (2) no information retrieval or integration at all. Satisficing may lead respondents to employ a variety of response strategies, including choosing the first response alternative that seems to constitute a reasonable answer, agreeing with an assertion made by a question, endorsing the status quo instead of endorsing social change, failing to differentiate among a set of diverse objects in ratings, saying ‘don't know’ instead of reporting an opinion, and randomly choosing among the response alternatives offered. This paper specifies a wide range of factors that are likely to encourage satisficing, and reviews relevant evidence evaluating these speculations. Many useful directions for future research are suggested.
Article
Although many studies have addressed the issue of response quality in survey studies, few have looked specifically at low-quality survey responses in surveys of college students. As students receive more and more survey requests, it is inevitable that some of them will provide low-quality responses to important campus surveys and institutional accountability measures. This study proposes a strategy for uncovering low-quality survey responses and describes how they may affect intercampus accountability measures. The results show that survey response quality does have an effect on intercampus accountability measures, and that certain individual and circumstantial factors may increase the likelihood of low-quality responses. Implications for researchers and higher education administrators are discussed. Keywords: Survey nonresponse, Accountability, Satisficing, Response quality, Institutional research, Survey method
Article
Participants are not always as diligent in reading and following instructions as experimenters would like them to be. When participants fail to follow instructions, this increases noise and decreases the validity of their data. This paper presents and validates a new tool for detecting participants who are not following instructions – the Instructional manipulation check (IMC). We demonstrate how the inclusion of an IMC can increase statistical power and reliability of a dataset.
Article
The use of the World Wide Web to conduct surveys has grown rapidly over the past decade, raising concerns regarding data quality, questionnaire design, and sample representativeness. This research note focuses on an issue that has not yet been studied: Are respondents who complete self-administered Web surveys more quickly—perhaps taking advantage of participation benefits while minimizing effort—also more prone to response order effects, a manifestation of “satisficing”? I surveyed a random sample of the US adult population over the Web and manipulated the order in which respondents saw the response options. I then assessed whether primacy effects were moderated by the overall length of time respondents took to complete the questionnaires. I found that low-education respondents who filled out the questionnaire most quickly were most prone to primacy effects when completing items with unipolar rating scales. These results have important implications for various aspects of Web survey methodology including panel management, human–computer interaction, and response order randomization.
Article
In this paper I have attempted to identify some of the structural characteristics that are typical of the "psychological" environments of organisms. We have seen that an organism in an environment with these characteristics requires only very simple perceptual and choice mechanisms to satisfy its several needs and to assure a high probability of its survival over extended periods of time. In particular, no "utility function" needs to be postulated for the organism, nor does it require any elaborate procedure for calculating marginal rates of substitution among different wants.
Attribute non-attendance and satisficing behavior in online choice experiments
  • M S Jones
  • L A House
  • Z Gao
Jones, M. S., House, L. A., & Gao, Z. (2015). Attribute non-attendance and satisficing behavior in online choice experiments. Proceedings in Food System Dynamics, 415-432. ISSN 2194-511X
Falling through the net: Toward digital inclusion
  • G L Rohde
  • R Shapiro
Rohde, G. L., & Shapiro, R. (2000). Falling through the net: Toward digital inclusion. Washington, DC: U.S. Department of Commerce, Economics and Statistics Administration and National Telecommunications and Information Administration.
Innovation in online research - who needs online panels
  • P Comley
Comley, P. (2003). Innovation in online research - who needs online panels? In MRS Research Conference Paper (Vol. 36, pp. 615-639). Warrendale, Pennsylvania.
Preventing satisficing in online surveys. A ‘kapcha’ to ensure higher quality data
  • A Kapelner
  • D Chandler
Kapelner, A., & Chandler, D. (2010). Preventing satisficing in online surveys. A 'kapcha' to ensure higher quality data. In The World's First Conference on the Future of Distributed Work Proceedings (CrowdConf2010), San Francisco, CA.
Respondent technology preferences
  • C Miller
Miller, C. (2009). Respondent technology preferences. In CASRO Technology Conference, New York, NY.
Satisficing behavior in online panelists
  • T Downes-Le Guin
Downes-Le Guin, T. (2005, June). Satisficing behavior in online panelists. In MRA Annual Conference & Symposium, Chicago, IL.