Article

SurveyMotion: what can we learn from sensor data about respondents’ completion and response behavior in mobile web surveys?


Abstract

Participation in web surveys via smartphones has increased continuously in recent years, driven by a growing proportion of smartphone owners and expanding mobile Internet access. However, research has shown that smartphone respondents are frequently distracted and/or multitasking, which might negatively affect completion and response behavior. We propose ‘SurveyMotion (SMotion)’, a JavaScript-based tool for mobile devices that can gather information about respondents’ motions during web survey completion by using sensor data. Specifically, we collect data on the total acceleration (TA) of smartphones. We conducted a lab experiment that varied the form of survey completion (e.g. standing or walking). Furthermore, we employed questions with different response formats (e.g. radio buttons and sliders) and measured response times. The results reveal that SMotion detects higher TAs of smartphones for respondents with comparatively higher motion levels. In addition, respondents’ motion level affects response times and the quality of the responses given. The SMotion tool promotes the exploration of how respondents complete mobile web surveys and could be employed to understand completion behavior in future mobile web surveys. © 2019
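The abstract describes JavaScript-based collection of total acceleration via smartphone sensors. A minimal sketch of that idea, assuming the standard W3C DeviceMotionEvent API; function and variable names here are illustrative, not the published SurveyMotion source:

```javascript
// Minimal sketch of sensor-based total-acceleration (TA) capture in a
// mobile browser, assuming the standard W3C DeviceMotionEvent API.
// Names are illustrative, not the published SurveyMotion source.

// TA is the Euclidean norm of the three accelerometer axes.
function totalAcceleration(ax, ay, az) {
  return Math.sqrt(ax * ax + ay * ay + az * az);
}

// One TA reading per devicemotion event, timestamped for later analysis.
const taLog = [];

function onDeviceMotion(event) {
  // accelerationIncludingGravity corresponds to the "with gravity"
  // variant; event.acceleration would give the gravity-free variant.
  const a = event.accelerationIncludingGravity;
  if (a && a.x !== null) {
    taLog.push({ t: event.timeStamp, ta: totalAcceleration(a.x, a.y, a.z) });
  }
}

// Register the listener only in a browser context.
if (typeof window !== "undefined") {
  window.addEventListener("devicemotion", onDeviceMotion);
}
```

In a survey page, `taLog` would be serialized and stored alongside the page's answers at submit time.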


... In this contribution, we introduce and outline the program codes of the following five new data types: 1) acceleration data (SurveyMotion with and without gravity; see Höhne, Revilla, & Schlosser, 2020; Höhne & Schlosser, 2019), 2) compass data, 3) Global Positioning System (GPS) data, 4) gyroscope data, and 5) swiping. ...
... SurveyMotion (SMotion) measures the total acceleration of mobile devices with and without gravity (see Höhne et al., 2020; Höhne & Schlosser, 2019) to explore completion behavior in mobile web surveys. Total acceleration is defined as follows: ...
... For a more detailed description of SMotion with and without gravity, we refer interested readers to Höhne et al. (2020) and Höhne and Schlosser (2019). ...
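The excerpts above truncate the definition of total acceleration. As commonly defined in this literature, TA for measurement $i$ is the Euclidean norm of the three accelerometer axes:

```latex
TA_i = \sqrt{a_{x,i}^2 + a_{y,i}^2 + a_{z,i}^2}
```

where $a_{x,i}$, $a_{y,i}$, and $a_{z,i}$ are the accelerations along the device's x, y, and z axes at measurement $i$.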
Article
Full-text available
Embedded Client Side Paradata (ECSP) is a tool that is licensed under the Creative Commons Attribution 4.0 International License (see http://creativecommons.org/licenses/by/4.0/). It is based on different programming languages, such as JavaScript and HTML. In general, ECSP can be implemented in web-based survey software solutions that provide access to the source code. It enables researchers to passively collect different kinds of client-side paradata, such as response times and scrolling events, and data from built-in sensors, such as Global Positioning System (GPS) and acceleration data. This is irrespective of the Internet browser and operating system used and allows researchers to investigate respondents’ completion behavior with respect to web surveys. Paradata and sensor data are collected at the page-level and are stored together with the actual survey data (i.e., respondents’ answers) in the same dataset.
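The page-level response-time paradata that ECSP collects can be illustrated with a small client-side timer. This is a hypothetical sketch of the general idea, not the ECSP source code:

```javascript
// Hypothetical sketch of page-level response-time paradata in the spirit
// of ECSP (names are illustrative; this is not the ECSP source code).
// The clock is injectable so the timer can be tested deterministically.
function createPageTimer(now = () => Date.now()) {
  const start = now(); // page load marks the start of the response time
  return {
    // elapsed milliseconds since the survey page was loaded
    elapsedMs: () => now() - start,
  };
}
```

On a real survey page, `elapsedMs()` would be read when the respondent submits the page and stored together with the answers in the same dataset.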
... Beyond the use of paradata to detect multitasking, Höhne and Schlosser (2019) and Toepoel and Lugtig (2015) suggest passively collecting sensor data in smartphone surveys to draw conclusions about respondents' completion conditions. Accelerometers measure the rate of change of velocity of an object over time, allowing researchers to unobtrusively record physiological states, such as movements (see Elhoushi et al., 2017; Harari et al., 2016; He et al., 2016; Höhne, Revilla, et al., 2020; Höhne & Schlosser, 2019; Toepoel & Lugtig, 2015). If a person moves or walks, they create acceleration (He et al., 2016), which is detected by the smartphone that is commonly worn on the body (e.g., in the pocket). This situation also applies to respondents who hold the smartphone in their hands while completing a web survey, constituting a "respondent-device" link (Höhne, Revilla, et al., 2020; Höhne & Schlosser, 2019). Put differently, respondents' motions are detectable by the acceleration sensor of smartphones, allowing researchers to classify smartphone respondents on the basis of their motion levels (i.e., acceleration). ...
Article
Full-text available
This study utilizes acceleration data from smartphone sensors to predict motion conditions of smartphone respondents. Specifically, we predict whether respondents are moving or nonmoving on a survey page level to learn about distractions and the situational conditions under which respondents complete smartphone surveys. The predicted motion conditions allow us to (1) estimate the proportion of smartphone respondents who are moving during survey completion and (2) compare the response behavior of moving and nonmoving respondents. Our analytical strategy consists of two steps. First, we use data from a lab experiment that systematically varied motion conditions of smartphone respondents and train a prediction model that is able to accurately infer respondents’ motion conditions based on acceleration data. Second, we use the trained model to predict motion conditions of respondents in two cross-sectional surveys in order to compare response behavior of respondents with different motion conditions in a field setting. Our results indicate that active movement during survey completion is a relatively rare phenomenon, as only about 3%–4% of respondents were predicted as moving in both cross-sectional surveys. When comparing respondents based on their predicted motion conditions, we observe longer completion times of moving respondents. However, we observe few differences when comparing moving and nonmoving respondents with respect to indicators of superficial responding, indicating that moving during survey completion does not pose a severe threat to data quality.
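The study above trains a prediction model on acceleration data; the actual model and feature set are not reproduced here, but the underlying idea of page-level moving/nonmoving classification can be sketched with a simple variance threshold on TA readings (the threshold value is assumed, not taken from the paper):

```javascript
// Illustrative stand-in for the paper's trained classifier: flag a
// survey page as "moving" when the standard deviation of its TA
// readings exceeds a threshold. The default threshold is an assumption
// for illustration, not a value from the paper.
function stdDev(values) {
  const mean = values.reduce((s, v) => s + v, 0) / values.length;
  const variance =
    values.reduce((s, v) => s + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance);
}

function classifyPage(taReadings, threshold = 0.5) {
  return stdDev(taReadings) > threshold ? "moving" : "nonmoving";
}
```

A held-still phone shows TA readings clustered around 1 g (gravity only), so their standard deviation stays near zero, while walking produces visibly larger fluctuations.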
... In addition, smartphones allow survey researchers to passively collect so-called sensor data, that is, data collected via a variety of built-in sensors, such as accelerometers, barometers, compasses, Global Positioning System (GPS) trackers, and gyroscopes. Sensor data have the potential to complement survey responses (Höhne & Schlosser, 2019; Toepoel & Lugtig, 2015) by providing information about respondents' physiological states, such as altitude level, motion, geographic orientation, and speed (see Elhoushi, Georgy, Noureldin, & Korenberg, 2017; Harari et al., 2016; Höhne & Schlosser, 2019; Toepoel & Lugtig, 2015). Data from these sensors can be collected either by JavaScript functions implemented in web survey pages or by apps installed on the smartphone. ...
... The current state of research on the usefulness and usability of collecting sensor data in smartphone surveys is characterized by a few empirical studies that use acceleration data, GPS data, or other sensor-based data (see Bohte & Maat, 2009; Elevelt et al., 2018; Harari et al., 2016; Höhne & Schlosser, 2019; Stopher & Shen, 2011). Despite this small body of research, the collection of smartphone acceleration data in particular seems to be a promising new way to investigate respondents' completion behavior and response quality in mobile web surveys (Höhne & Schlosser, 2019; Toepoel & Lugtig, 2015). ...
Article
The increased use of smartphones in web survey responding has not only raised new research questions but also fostered new ways to research survey completion behavior. Smartphones have many built-in sensors, such as accelerometers that measure acceleration (i.e., the rate of change of velocity of an object over time). Sensor data establish new research opportunities by providing information about physical completion conditions that, for instance, can affect response quality. In this study, we explore three research questions: (1) To what extent do respondents accept to comply with motion instructions? (2) What variables affect the acceleration of smartphones? (3) Do different motion levels affect response quality? We conducted a smartphone web survey experiment using the Netquest opt-in panel in Spain and asked respondents to stand at a fixed point or walk around while answering five single questions. The results reveal high compliance with motion instructions, with compliance being higher in the standing than in the walking condition. We also discovered that several variables, such as the presence of third parties, increase the acceleration of smartphones. However, the quality of responses to the five single questions did not differ significantly between the motion conditions, a finding that is in line with previous research. Our findings provide new insights into how compliance changes with motion tasks and suggest that the collection of acceleration data is a feasible and fruitful way to explore survey completion behavior. The findings also indicate that refined research on the connection between motion levels and response quality is necessary.
... This is due to the widespread use of both the devices themselves and high-speed networks, and to the habit of using mobile services, including in the field of medicine, for example, monitoring the health of the elderly [31], and the widespread introduction of online education during the pandemic [32]. Several studies [30,33] show that users' movement is a parameter characterizing the dispersion of their influence; therefore, additional smartphone capabilities, such as the accelerometer [34], are used to detect movement. Beyond the results of these studies, it should be noted that distractions (music in the background, conversation, etc.) are typical for web surveys in general, so when reaction speed is being tested, the user may bend down for a fallen object, straighten their clothes, etc. ...
... The existing studies discussed above [10-16,23,28-33,47] note differences in response times when different devices are used for psychological research on web platforms. In this regard, the paper considers whether the influence of a certain category of devices on reaction time can be reduced. ...
Article
Full-text available
Web surveys are an integral part of the feedback loop of Internet services and a research tool for studying respondents, including in the fields of health and psychology. Web technologies allow research to be conducted on large samples. For mental health, an important metric is reaction time in cognitive tests and in answering questions. The use of mobile devices such as smartphones and tablets has increased markedly in web surveys, so the impact of device types and operating systems needs to be investigated. This article proposes an architectural solution aimed at reducing the effect of device variability on the results of cognitive psychological experiments. An experiment was carried out to formulate the requirements for software and hardware. Three groups of 1000 respondents were considered, corresponding to three types of computers and operating systems: Mobile Device, Legacy PC, and Modern PC. The results showed a slight bias in the estimates for each group; the error for a group of devices deviates both upward and downward across the various tasks in a psychological experiment. Thus, for cognitive tests in which reaction time is critical, an architectural solution was synthesized for conducting psychological research in a web browser. The proposed solution considers the characteristics of the device participants use on the web platform and allows access to be restricted from devices that do not meet the specified criteria.
... There is an increasing number of studies evaluating the usefulness and usability of acceleration data in smartphone surveys (18-20). For instance, Höhne et al. (19) investigated respondents' compliance with simple motion tasks, such as standing at a fixed point (as in a balance test) and walking around (as in a walking test), in a self-administered smartphone survey using acceleration data. ...
... These plots illustrate the total acceleration of respondents' smartphones while they were required to do squats for 1 min. Total acceleration values lower than 1 indicate no motion [see (18)] and, thus, non-compliance with the squat task. Following this notion, the plot on the left side indicates non-compliance, the plot in the middle indicates partial compliance, and the plot on the right side indicates full compliance. ...
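The compliance notion described in this excerpt (TA values of 1 or below indicate no motion) can be sketched as a simple check over a page's TA readings. The assumption that TA is expressed in units of g, and the category cut-offs, are illustrative choices, not taken from the paper:

```javascript
// Sketch of the squat-task compliance check described above, assuming
// TA is expressed in units of g so that values of 1 or below indicate
// no motion. The category cut-offs are illustrative assumptions.
function squatCompliance(taReadings) {
  const movingShare =
    taReadings.filter((ta) => ta > 1).length / taReadings.length;
  if (movingShare === 0) return "non-compliance";
  if (movingShare < 0.5) return "partial compliance";
  return "full compliance";
}
```

A resting phone measures roughly 1 g (gravity alone), so a respondent doing squats produces a sustained run of readings above that baseline.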
Article
Full-text available
Digital health data that accompany data from traditional surveys are becoming increasingly important in health-related research. For instance, smartphones have many built-in sensors, such as accelerometers that measure acceleration so that they offer many new research possibilities. Such acceleration data can be used as a more objective supplement to health and physical fitness measures (or survey questions). In this study, we therefore investigate respondents' compliance with and performance on fitness tasks in self-administered smartphone surveys. For this purpose, we use data from a cross-sectional study as well as a lab study in which we asked respondents to do squats (knee bends). We also employed a variety of questions on respondents' health and fitness level and additionally collected high-frequency acceleration data. Our results reveal that observed compliance is higher than hypothetical compliance. Respondents gave mainly health-related reasons for non-compliance. Respondents' health status positively affects compliance propensities. Finally, the results show that acceleration data of smartphones can be used to validate the compliance with and performance on fitness tasks. These findings indicate that asking respondents to conduct fitness tasks in self-administered smartphone surveys is a feasible endeavor for collecting more objective data on physical fitness levels.
... In particular, smartphone sensors and apps allow researchers to collect new types of data, which can improve and expand survey measurement (Link et al., 2014), and offer the potential to reduce measurement errors, respondent burden and data collection costs (Jäckle et al., 2018). For example, GPS (McCool et al., 2021), accelerometers (Höhne & Schlosser, 2019; Höhne, Revilla, et al., 2020), web tracking applications and plug-ins (Bosch & Revilla, 2021b; Revilla et al., 2017) and microphones (Gavras & Höhne, 2022; Revilla & Couper, 2021; Revilla et al., 2020) have already been used in (mobile) web survey research. ...
Article
Full-text available
Images might provide richer and more objective information than text answers to open‐ended survey questions. Little is known, nonetheless, about the consequences for data quality of asking participants to answer open‐ended questions with images. Therefore, this paper addresses three research questions: (1) What is the effect of answering web survey questions with images instead of text on breakoff, noncompliance with the task, completion time and question evaluation? (2) What is the effect of including a motivational message on these four aspects? (3) Does the impact of asking to answer with images instead of text vary across device types? To answer these questions, we implemented a 2 × 3 between‐subject web survey experiment (N = 3043) in Germany. Half of the sample was required to answer using PCs and the other half with smartphones. Within each device group, respondents were randomly assigned to (1) a control group answering open‐ended questions with text; (2) a treatment group answering open‐ended questions with images; and (3) another treatment group answering open‐ended questions with images but prompted with a motivational message. Results show that asking participants to answer with images significantly increases participants' likelihood of noncompliance as well as their completion times, while worsening their overall survey experience. Including motivational messages, moreover, moderately reduces the likelihood of noncompliance. Finally, the likelihood of noncompliance is similar across devices.
... Usually, a classifier model is built from patterns in the training data of these sensors that are labeled based on known activity states. New, unlabeled data are then classified based on the probability that an identified pattern pertains to a specific activity (Elhoushi et al., 2017; Höhne & Schlosser, 2019; Kern, Höhne, Schlosser, & Revilla, 2020; Mulder & Kieruj, 2019). ...
Article
Smartphones have become central to our daily lives and are often present in the same contexts as their users. Researchers take advantage of this phenomenon by using data from smartphone sensors to infer everyday activities, such as mobility, physical activity, and sleep. For example, that a person is sleeping might be inferred from the fact that their phone is idle and that there is no sound and light around the phone. The success of inference from raw smartphone sensor data to activity outcomes depends, among other factors, on how smartphone owners use their device. Not having the smartphone in close proximity throughout the day, turning the device off, or sharing the device with others can constitute barriers that interfere with accurately measuring everyday activity with data from the phone's native sensors. Against this background, we surveyed two independent, large-scale samples of German smartphone owners (n1 = 3956; n2 = 2525) on how they use their smartphones, with a focus on three everyday activities: mobility, physical activity, and sleep. We find that both sociodemographic and smartphone-related characteristics are associated with how people use their smartphones, and that this affects the suitability of smartphone data for measuring everyday activities.
... Furthermore, they are able to gather digital trace data about survey participants. Most smartphones provide hardware that allows, among other features, voice identification and geo- and eye-tracking (Höhne & Schlosser, 2019; Lane et al., 2010). However, substantial research and the development of theoretical frameworks on how survey data and digital trace data can be integrated sustainably have only recently gained the attention of survey methodologists, and thus, research on this matter is comparably scarce (for an overview, see Stier et al., 2019). ...
Article
Full-text available
Panel attrition poses major threats to the survey quality of panel studies. Many features have been introduced to keep panel attrition as low as possible. Based on a random sample of refugees, a highly mobile population, we investigate whether using a mobile phone application improves address quality and response behavior. Various features, including geo-tracking and the collection of email addresses and address changes, are tested. Additionally, we investigate respondent and interviewer effects on the consent to download the app and to share GPS geo-positions. Our findings show that neither geo-tracking nor the provision of email addresses nor the collection of address changes through the app improves address quality substantially. We further show that interviewers play an important role in convincing the respondents to install and use the app, whereas respondent characteristics are largely insignificant. Our findings provide new insights into the usability of mobile phone applications and help determine whether they are a worthwhile tool to decrease panel attrition.
... JavaScript is especially useful for single-time measurements, for example, to collect respondents' current location or when respondents are asked to take a picture. An example of a JavaScript tool is SurveyMotion (Höhne & Schlosser, 2019), which collects respondents' acceleration data during an online survey. ...
Article
Full-text available
In this paper we discuss the implications of using mobile devices for online survey completion. With more and more people accessing online surveys on mobile devices, online surveys need to be redesigned in order to be able to meet the characteristics of mobile device usage, such as small screens and short messaging. We discuss mobile friendly design by focussing on survey layout, the length of the survey, special features, and the decision of making the survey app or browser based. Further, we discuss the different sensors that can be used to augment or replace survey questions, and respondents’ willingness to share sensor data. We end with three examples of surveys conducted by Statistics Netherlands, where sensors are used for active and passive measurement in mobile surveys.
Chapter
Respondents in web surveys can usually choose the device with which they participate. Owing to technical progress and growing network coverage, more and more people decide to take part in web surveys via smartphone, although the larger share of respondents still answers on a PC. Based on two student surveys, our study examines differences in completion time and response behavior between PC and smartphone users. We further examine differences between optimized and non-optimized visual question design. Our analysis shows that smartphone respondents have longer completion times than PC respondents, but that this has hardly any effect on response behavior. Comparing optimized and non-optimized questionnaire design, we found a significantly higher breakoff rate when a smartphone was used with a non-optimized design. The results of our study suggest that data collection with smartphones is a genuine alternative to traditional web surveys via PC, especially considering that smartphone-based data collection also offers the possibility of analyzing additional data (e.g., sensor or GPS data).
Article
Full-text available
Embedded Client Side Paradata (ECSP) is a tool that is licensed under the Creative Commons Attribution 4.0 International License (see http://creativecommons.org/licenses/by/4.0/). It is based on different programming languages, such as JavaScript and HTML. In general, ECSP can be implemented in web-based survey software solutions that provide access to the source code. It enables researchers to passively collect different kinds of client-side paradata, such as response times and scrolling events – irrespective of the Internet browser and operating system used – to investigate respondents’ completion behavior with respect to web surveys. Paradata are collected at the page-level and are stored together with the actual survey data (i.e., respondents’ answers) in the same dataset.
Article
Full-text available
Recognition of the mode of motion or mode of transit of the user or platform carrying a device is needed in portable navigation, as well as in other technological domains. This paper provides an extensive survey of motion mode recognition approaches, comparing and describing them from different viewpoints: usability and convenience, types of devices in terms of setup, mounting, and data acquisition, the various types of sensors used, the signal processing methods employed, the features extracted, and the classification techniques applied. The paper ends with a quantitative comparison of the performance of motion mode recognition modules developed by researchers in different domains.
Chapter
Full-text available
An important technical distinction regarding the collection of paradata in web surveys is that they can be collected on the server side and/or the client side. In web surveys, paradata is categorized into device-type paradata and questionnaire navigation paradata. Device-type paradata provide information regarding the kind of device used to complete the survey. Questionnaire navigation paradata describe the entire process of filling out the questionnaire. This chapter provides examples of usage for device-type and questionnaire navigation paradata. Another use of paradata pioneered in the early 2000s by Jeavons is adaptive scripting. Adaptive scripting refers to using paradata in real time to change the survey experience for the respondent. The chapter also discusses two main classes of software to collect paradata such as specific paradata software and paradata collection tools embedded in commercial and non-commercial survey platforms. Ethical and communication issues are important considerations in using web survey paradata.
Article
Full-text available
An activity model based on 3D acceleration and gyroscope data is created in this paper, and the difference between activities of daily living (ADLs) and falls is analyzed first. The kNN algorithm and a sliding window are introduced to develop a smart-device-enabled system for fall detection and alerting, composed of a wearable motion sensor board and a smartphone. The motion sensor board, integrating a triaxial accelerometer, gyroscope, and Bluetooth, is attached to a custom vest worn by the elderly to capture the resultant acceleration and angular velocity of ADLs in real time. The data streamed via Bluetooth are sent to a smartphone, which runs a program based on the kNN algorithm and sliding window to analyze the stream and detect falls in the background. Finally, the experiment shows that the system identifies simulated falls from ADLs with a high accuracy of 97.7%, while sensitivity and specificity are 94% and 99%, respectively. Moreover, the smartphone can issue an alarm and notify caregivers to provide timely and accurate help for the elderly as soon as a fall is detected.
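The kNN classification step described in this abstract can be sketched compactly. The feature vectors and labels below are made up for illustration; the paper's sensor board, feature set, and training data are not reproduced:

```javascript
// Minimal kNN sketch of the fall-vs-ADL classification idea described
// above. Each training item pairs a numeric feature vector (e.g.,
// statistics computed over a sliding window of acceleration readings)
// with a known activity label; feature choice here is hypothetical.
function knnClassify(train, sample, k = 3) {
  // Rank training items by Euclidean distance to the unlabeled sample.
  const neighbors = train
    .map(({ features, label }) => ({
      label,
      dist: Math.hypot(...features.map((f, i) => f - sample[i])),
    }))
    .sort((a, b) => a.dist - b.dist)
    .slice(0, k);
  // Majority vote among the k nearest neighbors.
  const votes = {};
  for (const { label } of neighbors) votes[label] = (votes[label] || 0) + 1;
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}
```

In the system described above, windows classified as falls would then trigger the alarm and caregiver notification.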
Article
Full-text available
Survey research is changing at a more rapid pace than ever before, and the continuous, exponential growth in technological developments is not likely to slow down. Online surveys are now being completed on a range of different devices: PCs, laptops, tablets, mobile phones or hybrids between these devices. Each device varies in screen size, mode of operationalization and technological possibilities. We define online surveys that are in practice being completed on different devices as mixed-device surveys. This special issue discusses issues in the design and implementation of mixed-device surveys, with the aim of bringing survey research to the next level: in our view, all web surveys should from now on be thought of as mixed-device surveys. Theory and best practices for mixed-device surveys are still in their infancy. The current state of knowledge about the dynamics of taking surveys on mobile devices is not as advanced as necessary in times of rapid change. While current technology opens great possibilities to collect data via text, apps, and visuals, there is little scientific research published about the actual uses and best practices of these applications to increase data quality. Researchers, and survey methodologists in particular, need to find ways to keep up with fast-changing technologies.
Article
Full-text available
The considerable growth in the number of smart mobile devices with a fast Internet connection provides new challenges for survey researchers. In this article, I compare the data quality between two survey modes: self-administered web surveys conducted via personal computer and those conducted via mobile phones. Data quality is compared based on five indicators: (a) completion rates, (b) response order effects, (c) social desirability, (d) non-substantive responses, and (e) length of open answers. I hypothesized that mobile web surveys would result in lower completion rates, stronger response order effects, and less elaborate answers to open-ended questions. No difference was expected in the level of reporting in sensitive items and in the rate of non-substantive responses. To test the assumptions, an experiment with two survey modes was conducted using a volunteer online access panel in Russia. As expected, mobile web was associated with a lower completion rate, shorter length of open answers, and similar level of socially undesirable and non-substantive responses. However, no stronger primacy effects in mobile web survey mode were found.
Article
Full-text available
Grid or matrix questions are associated with a number of problems in web surveys. In this article, we present results from two experiments testing the design of grid questions to reduce breakoffs, missing data, and satisficing. The first examines dynamic elements to help guide respondents through the grid, and splitting a larger grid into component pieces. The second manipulates the visual complexity of the grid and tests simplifying it. We find that using dynamic feedback to guide respondents through a multiquestion grid helps reduce missing data. Splitting the grids into component questions further reduces missing data and motivated underreporting. The visual complexity of the grid appeared to have little effect on performance.
Article
Full-text available
We propose a framework of ways in which the different context of mobile interviews—such as multi-tasking, distraction, and the presence of others—and differences inherent in the technology can influence survey responses. The framework also highlights the mechanisms through which these influences operate. We evaluate selected elements of the framework using data from a randomized experiment in which respondents were interviewed by mobile or landline. Measures of interview context were gathered via interviewer evaluation, respondent perception, and direct questioning. We find less social desirability bias with mobile phone interviews, but overall only small differences between mobile and landline interviews.
Article
The use of agree/disagree (A/D) questions is a common technique to measure attitudes. For instance, this question format is employed frequently in the Eurobarometer and International Social Survey Programme (ISSP). Theoretical considerations, however, suggest that A/D questions require a complex processing. Therefore, many survey researchers have recommended the use of item-specific (IS) questions, since they seem to be less burdensome. Parallel to this methodological discussion is the discussion around the use of mobile devices for responding to surveys. However, until now, evidence has been lacking as to whether the use of mobile devices for survey response affects the performance of established question formats. In this study, implemented in the Netquest panel in Spain (N = 1,476), we investigated the cognitive effort and response quality associated with A/D and IS questions across PCs and smartphones. For this purpose, we applied a split-ballot design defined by device type and question format. Our analyses revealed longer response times for IS questions than A/D questions, irrespective of the device type and scale length. Also, the IS questions produced better response quality than their A/D counterparts. All in all, the findings indicate a more conscientious response to IS questions compared to A/D questions.
Article
Measuring attitudes and opinions employing agree/disagree (A/D) questions is a common method in social research because it appears to be possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require considerable cognitive processing. Item-specific (IS) questions, in contrast, offer content-related response categories, implying less cognitive processing. To investigate the respective cognitive effort and response quality associated with A/D and IS questions, we conducted a web-based experiment with 1,005 students. Cognitive effort was assessed by response times and answer changes. Response quality, in contrast, was assessed by different indicators such as dropouts. According to our results, single IS questions require higher cognitive effort than single A/D questions in terms of response times. Moreover, our findings show substantial differences in processing single and grid questions.
Article
Much research has been done comparing grids and item-by-item formats. However, the results are mixed, and more research is needed especially when a significant proportion of respondents answer using smartphones. In this study, we implemented an experiment with seven groups (n = 1,476), varying the device used (PC or smartphone), the presentation of the questions (grids, item-by-item vertical, item-by-item horizontal), and, in the case of smartphones only, the visibility of the “next” button (always visible or only visible at the end of the page, after scrolling down). The survey was conducted by the Netquest online fieldwork company in Spain in 2016. We examined several outcomes for three sets of questions, which are related to respondent behavior (completion time, lost focus, answer changes, and screen orientation) and data quality (item missing data, nonsubstantive responses, instructional manipulation check failure, and nondifferentiation). The most striking difference found is for the placement of the next button in the smartphone item-by-item conditions: When the button is always visible, item missing data are substantially higher.
Article
Web surveys are commonly used in social research because they are usually cheaper, faster, and simpler to conduct than other modes. They also enable researchers to capture paradata such as response times. Particularly, the determination of proper values to define outliers in response time analyses has proven to be an intricate challenge. In fact, to a certain degree, researchers determine them arbitrarily. In this study, we use “SurveyFocus (SF)”—a paradata tool that records the activity of the web-survey pages—to assess outlier definitions based on response time distributions. Our analyses reveal that these common procedures provide relatively sufficient results. However, they are unable to detect all respondents who temporarily leave the survey, causing bias in the response times. Therefore, we recommend a two-step procedure consisting of the utilization of SF and a common outlier definition to attain a more appropriate analysis and interpretation of response times.
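One family of outlier definitions assessed in work like this can be sketched in JavaScript (the language of the SurveyMotion tool itself). The sketch below is illustrative only, not the authors' procedure: it applies a generic rule that flags response times lying more than k standard deviations from the mean of the log-transformed times; the function name and the default k = 2 are assumptions.

```javascript
// Flag response times (in ms) whose log-transformed values lie more than
// k standard deviations from the mean of the log-transformed distribution.
// Log-transforming first is common because response times are right-skewed.
function flagOutliers(timesMs, k = 2) {
  const logs = timesMs.map((t) => Math.log(t));
  const mean = logs.reduce((a, b) => a + b, 0) / logs.length;
  const sd = Math.sqrt(
    logs.reduce((s, x) => s + (x - mean) ** 2, 0) / logs.length
  );
  // Returns one boolean per input time: true = flagged as outlier
  return logs.map((x) => Math.abs(x - mean) > k * sd);
}
```

As the abstract notes, such distribution-based cutoffs cannot distinguish a genuinely slow respondent from one who temporarily left the survey page, which is why pairing them with page-activity paradata is recommended.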
Article
Purpose – Despite the quick spread of the use of mobile devices in survey participation, there is still little knowledge about the potentialities and challenges that arise from this increase. The purpose of this paper is to study how respondents’ preferences drive their choice of a certain device when participating in surveys. Furthermore, this paper evaluates the tolerance of participants when specifically asked to use mobile devices and carry out other specific tasks, such as taking photographs.
Design/methodology/approach – Data were collected by surveys in Spain, Portugal and Latin America by Netquest, an online fieldwork company.
Findings – Netquest panellists still mainly preferred to participate in surveys using personal computers. Nevertheless, the use of tablets and smartphones in surveys showed an increasing trend; more panellists would prefer mobile devices, if the questionnaires were adapted to them. Most respondents were not opposed to the idea of participating in tasks such as taking photographs or sharing GPS information.
Research limitations/implications – The research concerns an opt-in online panel that covers a specific area. For probability-based panels and other areas the findings may be different.
Practical implications – The findings show that online access panels need to adapt their surveys to mobile devices to satisfy the increasing demand from respondents. This will also allow new, and potentially very interesting data collection methods.
Originality/value – This study contributes to survey methodology with updated findings focusing on a currently underexplored area. Furthermore, it provides commercial online panels with useful information to determine their future strategies.
Article
More and more respondents use mobile devices to complete web surveys. These devices have different characteristics compared to PCs (e.g. smaller screen sizes and higher portability). These characteristics can affect the survey responses, especially when a questionnaire includes sensitive questions. This topic was studied by Mavletova and Couper (2013) through a two-wave experiment comparing PC and mobile device results for the same respondents in a Russian opt-in panel. We replicated this cross-over design, focusing on an opt-in panel for Spain, involving 1,800 panellists and comparing PCs and smartphones. Our results support most of Mavletova and Couper’s (2013) findings (e.g. generally the device used does not significantly affect the reporting of sensitive information), confirming their robustness across the two studied countries. For other results (e.g. trust in data confidentiality), we found differences that can be explained by the different context/culture or by the rapid changes that still characterize mobile web survey participation.
Article
Conforming to W3C specifications, mobile web browsers allow JavaScript code in a web page to access motion and orientation sensor data without the user's permission. The associated risks to user security and privacy are, however, not considered in W3C specifications. In this work, for the first time, we show how user security can be compromised using these sensor data via the browser, even though the data rate is 3–5 times lower than what is available in apps. We examine multiple popular browsers on Android and iOS platforms and study their policies in granting permissions to JavaScript code with respect to access to motion and orientation sensor data. Based on our observations, we identify multiple vulnerabilities, and propose TouchSignatures, which implements an attack where malicious JavaScript code on an attack tab listens to such sensor data measurements. Based on these streams, TouchSignatures is able to distinguish the user's touch actions (i.e., tap, scroll, hold, and zoom) and her PINs, allowing a remote website to learn the client-side user activities. We demonstrate the practicality of this attack by collecting data from real users and reporting high success rates using our proof-of-concept implementations. We also present a set of potential solutions to address the vulnerabilities. The W3C community and major mobile browser vendors including Mozilla, Google, Apple and Opera have acknowledged our work and are implementing some of our proposed countermeasures.
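The browser access path described here — the same one SurveyMotion-style tools use legitimately — can be sketched with the standard W3C `devicemotion` event. The listener registration below only runs in a browser; the helper name `totalAcceleration` is illustrative. (Note that newer iOS Safari versions now gate this behind an explicit `DeviceMotionEvent.requestPermission()` prompt, a countermeasure of the kind proposed in this line of work.)

```javascript
// Euclidean norm of the three acceleration axes (in m/s^2):
// the "total acceleration" measure used in motion-based tools.
function totalAcceleration(x, y, z) {
  return Math.sqrt(x * x + y * y + z * z);
}

// In a browser, page JavaScript can subscribe to motion sensor readings.
// accelerationIncludingGravity may be null on devices without sensors.
if (typeof window !== 'undefined' && 'DeviceMotionEvent' in window) {
  window.addEventListener('devicemotion', (event) => {
    const a = event.accelerationIncludingGravity;
    if (a) {
      console.log(totalAcceleration(a.x || 0, a.y || 0, a.z || 0));
    }
  });
}
```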
Article
Computers play an important role in everyday multitasking. Within this context, we focus on respondent multitasking (RM) in web surveys. RM occurs when users engage in other activities while responding to a web survey questionnaire. The conceptual framework is built on existing literature on multitasking, integrating knowledge from both cognitive psychology and survey methodology. Our main contribution is a new approach for measuring RM in web surveys, which involves an innovative use of the different types of paradata defined as non-reactive electronic tracks concerning respondents' process of answering the web questionnaire. In addition to using questionnaire page completion time as a measure of RM, we introduce 'focus-out' events that indicate when respondents have left the window containing the web questionnaire (e.g., to chat, email, browse) and then returned. The approach was tested in an empirical study using a web survey on a student sample (n = 267). The results indicate that 60% of respondents have multitasked at least once. In addition, they reveal that item nonresponse as an indicator of response quality is associated with RM, while non-differentiation is not. Although this study confirms that a paradata-based approach is a feasible means of measuring RM, future research on this topic is warranted.
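The 'focus-out' events described above rely on standard browser `blur` and `focus` events on the questionnaire window. The recorder below is a minimal illustrative sketch, not the authors' implementation; the factory name and the injectable clock are assumptions made so the logic stays testable.

```javascript
// Record each interval during which the respondent left the survey window:
// a blur marks leaving, the next focus marks returning.
function createFocusTracker(now = () => Date.now()) {
  const log = [];
  let leftAt = null;
  return {
    onBlur() { leftAt = now(); },
    onFocus() {
      if (leftAt !== null) {
        const t = now();
        log.push({ leftAt, returnedAt: t, awayMs: t - leftAt });
        leftAt = null;
      }
    },
    // Number of completed focus-out episodes (a simple RM indicator)
    focusOutCount: () => log.length,
    events: () => log.slice(),
  };
}

// In a browser, wire the tracker to the window's blur/focus events.
if (typeof window !== 'undefined') {
  const tracker = createFocusTracker();
  window.addEventListener('blur', tracker.onBlur);
  window.addEventListener('focus', tracker.onFocus);
}
```

Combined with page completion times, such a log lets a researcher count how often, and for how long, a respondent switched away while answering.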
Article
Survey researchers are making increasing use of paradata - such as keystrokes, clicks, and timestamps - to evaluate and improve survey instruments but also to understand respondents and how they answer surveys. Since the introduction of paradata, researchers have been asking whether and how respondents should be informed about the capture and use of their paradata while completing a survey. In a series of three vignette-based experiments, we examine alternative ways of informing respondents about capture of paradata and seeking consent for their use. In all three experiments, any mention of paradata lowers stated willingness to participate in the hypothetical surveys. Even the condition where respondents were asked to consent to the use of paradata at the end of an actual survey resulted in a significant proportion declining. Our research shows that requiring such explicit consent may reduce survey participation without adequately informing survey respondents about what paradata are and why they are being used.
Article
Everyday life requires frequent shifts between cognitive tasks. Research reviewed in this article probes the control processes that reconfigure mental resources for a change of task by requiring subjects to switch frequently among a small set of simple tasks. Subjects' responses are substantially slower and, usually, more error-prone immediately after a task switch. This 'switch cost' is reduced, but not eliminated, by an opportunity for preparation. It seems to result from both transient and long-term carry-over of 'task-set' activation and inhibition as well as time consumed by task-set reconfiguration processes. Neuroimaging studies of task switching have revealed extra activation in numerous brain regions when subjects prepare to change tasks and when they perform a changed task, but we cannot yet separate 'controlling' from 'controlled' regions.
Internet, mail, and mixed-mode surveys: The tailored design method
  • D A Dillman
  • J D Smyth
  • L M Christian
Sensor data: Measuring acceleration of smartphones in mobile web surveys. Poster presented at the general online research conference
  • S Schlosser
  • J K Höhne
Using paradata to explore item level response times in surveys
  • M P Couper
  • F Kreuter
Describing response behavior in web surveys using client side paradata. Paper presented at the international workshop on web surveys
  • D Heerwegh
Does the continuity of web-survey processing matter? Poster presented at the conference of the European survey research association
  • S Schlosser
  • J K Höhne
Is the smartphone participation affecting the web survey experience?
  • D Toninelli
  • M Revilla