
PC, phone or tablet? Use, preference and completion rates for web surveys

Please cite as:
Brosnan, K., Gruen, B., & Dolnicar, S. (2017). PC, phone or tablet? Use, preference
and completion rates for web surveys. International Journal of Market Research,
59(1), 35-56.
http://dx.doi.org/10.2501/IJMR-2016-049
Abstract
Online market research is struggling with sample representativity. Representativity is
undermined if personal computer, tablet and smartphone respondents differ in socio-
demographic characteristics and display different survey completion rates. This
study investigates whether this is the case. The analysis of more than ten million
survey invitations as well as stated device preference information suggests that web
survey respondents who are members of online panels still mostly use their personal
computers, but do express increasing interest in using smartphones and tablets.
Survey completion rates do vary across devices and device use is significantly
associated with socio-demographic characteristics and length of membership on a
panel. Therefore, researchers should not restrict respondents to a specific device for
completing a survey, because doing so may compromise the quality of the survey
completion experience, increase non-response error and negatively affect representativity.
Introduction
The use of mobile devices to complete web surveys has risen significantly in recent
years (Kinesis 2013; de Bruijne & Wijnant 2014a; Wang & McCutcheon 2014).
Research Now – a global online research panel company – reports an increase in
mobile survey completion from ten percent in 2012 to 24 percent in 2014, and the
Gallup Panel an increase from nine to 26 percent over the same period (Wang &
McCutcheon 2014). This trend is already
impacting the market and social research industry, where mobile surveys are
perceived as the most important new data collection method (GreenBook Research
Industry Trends Report (GRIT) 2014). Forty-eight percent of market research
suppliers are already using mobile surveys, 39 percent have mobile surveys under
consideration (GRIT 2014), 42 percent are able to deliver surveys using mobile apps
and 70 percent are able to run them via mobile browsers (Confirmit 2013). Many
speculate that the era of exclusively personal computer based online surveys is over
and almost half of the market research suppliers are exploring new technologies and
new methodologies (GRIT 2014).
Despite the increased demand for completing surveys on mobile devices, and
despite the ability of web survey technology to identify which device a respondent is
using and to direct them to the most suitable survey implementation (de Bruijne &
Wijnant 2014b), many market research companies have been slow to optimise
questionnaires beyond merely adjusting the screen size. Multimodal web surveys –
which assume that respondents will use a range of devices to complete their web
surveys – require a different questionnaire design, specifically shorter response lists,
no grid items, short questions, and limited scrolling (Wells, Bailey & Link 2014).
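To make the device-detection step concrete, the following minimal sketch classifies a respondent's device from the User-Agent string of the incoming survey request. It is an illustration only: the function name and keyword rules are assumptions, and commercial survey platforms rely on far more comprehensive detection libraries (and often on client-side information such as screen size).

```python
def classify_device(user_agent: str) -> str:
    """Rough device classification from a User-Agent string (illustrative only)."""
    ua = user_agent.lower()
    # Check tablets first: many tablet user agents contain "android"
    # but, unlike phones, usually lack the "mobile" token.
    if "ipad" in ua or "tablet" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "iphone" in ua or "mobile" in ua:
        return "smartphone"
    return "personal computer"


# A respondent could then be redirected to the survey layout optimised
# for the detected device class.
print(classify_device(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) AppleWebKit/605.1.15 Mobile/15E148"
))  # prints: smartphone
```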
Measurement bias might occur when questions are not adapted for the device. But
measurement bias is not the focus of the present study. It should be noted, however,
that measurement bias can further exacerbate the problem of device use being
associated with personal characteristics of respondents. Whilst Wells (2015) calls for
market researchers to accommodate device preference and availability, little
guidance is provided on how exactly to achieve this, especially if device optimisation
is not limited to screen size compatibility, but aims to actually ensure a pleasant
survey experience.
The increasing availability of web-enabled mobile devices and, consequently, the
likely increase in the demand for completing web surveys on such devices could
have serious negative implications for online survey research.
Sample representativity can be negatively affected if a survey is not available on
all devices, or not developed in a way that is identical and equally functional on all
possible devices – as is currently the case – and if device preference is associated
with socio-demographic characteristics of respondents, such as age and gender. Such a
situation can introduce systematic non-response bias and decrease the validity of
the findings. Sample representativeness has been highlighted as a critical issue in
the market research industry repeatedly, including by ESOMAR, the World
Association for Social, Opinion and Market Research, and the Global Research
Business Network (GRBN), which issued a Guideline for Online Sample Quality
discussing in detail the need for improving the representativity of the sample
(ESOMAR and GRBN 2015).
In addition to the direct non-response effect resulting from certain subgroups of
the population not being represented in the sample, an indirect non-response effect
can arise if completion rates differ across survey devices. Completion rates could
differ systematically if respondents are not enjoying their survey experience due to a
lack of survey optimisation for mobile devices. Survey software supplier
SurveyMonkey (2014) finds that conducting web surveys on mobile devices requires
fewer questions per page and the increased use of multiple choice questions. Lack
of such optimisation is likely to make completing web surveys on mobile devices
frustrating. Such frustration will, in turn, negatively affect completion rates. Even
when surveys have been optimised for mobile devices, completion rates appear to be
lower on such devices (Stapleton 2011; Buskirk & Andrus 2012; Mavletova 2013;
Mavletova & Couper 2013; McGeeney & Marlar 2013). If completion rates differ
systematically across devices, the validity of findings can be affected. This problem
could be further exacerbated by respondents’ low willingness to switch devices
(McClain et al. 2012; Millar & Dillman 2012; Peterson 2012; Link et al. 2013; Vision
Critical 2014). Once potential survey respondents open the web survey on their
preferred device or the device available to them, they may find that the lack of
optimisation makes the survey completion experience unpleasant and choose to
stop answering before completing the survey.
The present study investigates whether there is empirical evidence for the two
problems above. Specifically, the study investigates – using both observational and
self-report data from members of online survey panels – (1) whether there is evidence for a
direct negative effect of non-response bias on web survey representativity resulting
from an association of socio-demographic respondent characteristics with device
preference and use, and (2) whether there is empirical evidence of an indirect non-
response bias due to completion rates varying across devices used for web surveys.
The following two research questions are addressed:
Research Question #1: Do personal computer, tablet and smartphone
respondents differ in socio-demographic characteristics?
Research Question #2: Do personal computer, tablet and smartphone
respondents display different survey completion rates?
Findings contribute to knowledge on sources of systematic error in web surveys.
Results are of immediate practical relevance as they provide guidance to the market
research industry on how to improve the representativity of samples for web surveys
and how to keep completion rates as high as possible despite an increasing shift of
web respondents to web-enabled mobile devices.
Prior Work
Survey administration effects on completion rates and representativity
Surveys can be administered in person in a face-to-face situation, mailed out to
respondents for self-completion, or administered via the telephone or the internet.
The effects of such variations in the mode of survey
delivery have been extensively studied and there is broad consensus that mode
affects the nature of the sample that is captured. For example, survey
methodologists such as Dillman (1978, 1991, 2000, 2006, 2007, 2011) and Groves
(2004, 2006, 2011, 2012) have created an entire body of work investigating the
influence of administration mode on completion rates with the aim of improving
representativity of survey studies.
Modal shifts over time have typically been driven by advancing technology and
the pressure to reduce the cost of surveys. New modes – in the recent past primarily
web surveys – have been subject to scrutiny for coverage error, with sample
representativity depending on the target audience’s access to the new technology.
Knowledge on the effect of completion mode on sample representativity has to
date been primarily derived using three different methodological approaches: (1)
empirical and experimental studies based on a single survey (Dillman 2000, 2007,
2011; Millar & Dillman 2012; Dillman et al. 2014) which is conducted on a specific
topic (Kaplowitz et al. 2004; Peytchev & Hill 2010; de Bruijne & Wijnant 2013b), (2)
meta-analyses of a number of published studies linking administration mode to
sample representativity (Cook, Heath & Thompson 2000; Manfreda et al. 2008; Shih
& Fan 2008), and (3) longitudinal studies which manipulated the way the survey was
conducted across more than one survey wave administered to the same
respondents (de Leeuw & de Heer 2002; Tortora 2009; Link 2013). Most of these
studies conclude that web surveys result in lower completion rates compared to
telephone or face-to-face surveys. Compared with mail surveys, results have been
mixed, with variability mainly due to access to the internet.
Most recently, a number of studies have focused on different devices that can be
used to complete web surveys, including personal computers, tablets and
smartphones. This body of work consistently finds that completion rates vary by
device (Buskirk & Andrus 2012; de Bruijne & Wijnant 2013b; de Bruijne & Wijnant
2014a; Mavletova 2013; Mavletova & Couper 2014; Millar & Dillman 2012; Peytchev
& Hill 2010; Wells et al. 2014). Specifically, completion rates appear to be lower on
mobile devices than on personal computers (de Bruijne & Wijnant 2013b), although
some studies detect no difference (Antoun 2014).
Device use and respondent characteristics
A number of studies have investigated device preferences in web surveys (see
Wells (2015) and Link et al. (2014) for an overview). A key finding from this stream of
research is that mobile devices are increasingly used to participate in web surveys.
As people spend more time with their smartphone, it becomes the obvious alternative
web survey completion device. It is known, for example, that the device on which the
email invitation is read is more likely to be used for survey completion (de Bruijne &
Wijnant 2013b). Tablet use is also dramatically increasing in web survey completion
(de Bruijne & Wijnant 2014a). Couper (2013) observes that, when presented with
the option to choose a device, most respondents now choose a mobile device (57
percent). Lifestyle factors such as device ownership, number of mobile apps, email
usage on mobile devices, frequency of mobile web browser usage and frequency of
device use, all influence which device is used for web surveys (de Bruijne & Wijnant
2013b; Couper 2013; Mavletova 2013; Liebe, Glenk, Oehlmann & Meyerhoff 2015).
A second key finding is that panel members increasingly refuse to complete web
surveys on personal computers. This observation holds independently of whether or
not the survey is optimised for mobile devices (de Bruijne & Wijnant 2013b).
Finally, a number of studies point to an association between device preference or
availability and socio-demographic characteristics of respondents, especially age (de
Bruijne & Wijnant 2013b; Mavletova 2013; Toepoel & Lugtig 2014). These findings have
major implications for the representativity of samples. If indeed such associations
exist and – as is currently the case in the market research industry – surveys are not
available, identical and equally functional on all devices, the availability of a range of
electronic devices to complete the
survey may negatively influence the representativity of samples.
Device use and completion rates
Commercial market research panels or online access panels contain a group of
people who have been recruited and agreed to participate in web surveys and have
access to the internet (Göritz et al. 2002). Samples are regularly drawn from these
panels and invited to participate in a range of surveys; however, there has been
limited use of a common framework to report response rates accurately (Callegaro &
DiSogra 2008). Despite Callegaro and DiSogra’s (2008) efforts to develop a way of
calculating a response rate, there has been a lack of meta-analyses investigating
trends or changes in responses from online access panels over the past seven years,
with specific attention to the increasing trend towards mobile device use. There is
little empirical evidence that a decline or increase in response rates for web surveys
has occurred as new survey design techniques such as device detection and
optimisation have been employed. Whilst Callegaro (2013) suggests collecting paradata to
monitor device use in web surveys, little has been published on this to date. The
literature is also limited in terms of investigating commercial market research panel
members to determine if response behaviour is associated with device use.
Prior studies investigating web survey administration effects on completion rates
and representativity, device use and respondent characteristics, and device use and
completion rates share some common findings; however, they all have one key
limitation: they investigate individual surveys only, which may be affected by other
known response effects (e.g. topic salience, incentive payments, survey length), thus
limiting the generalisability of findings. In the present study, data from more than one
panel, more than one survey invitation, more than one survey and more than one
point in time are analysed.
Methodology
To answer both research questions, data need to contain: (1) socio-demographic
information about web respondents, (2) the device used or the stated choice of
device on which they wish to complete the survey, and (3) their completion rate of
surveys given the device used. The three data sets used in the present study –
jointly – contain all this information.
Data set #1 was provided by a global online access panel used for commercial
market and social research. It was compiled in the first four months of 2015 and
contains information about 10,006,139 invitations, 2,876,823 started and 2,330,070
finished web surveys in Great Britain and Australia varying in survey topic, survey
length, industry, and incentives. Figure 1 shows the points in time at which data
were collected. Started surveys are those where invited respondents start
completing the survey by clicking on the survey link. Whether they clicked on the
survey link using their smartphone, tablet or personal computer was recorded, and
the surveys were optimised for the device used. Finished surveys are those that
have been either completed or where respondents were screened out, because – in
both these cases – respondents complied with what was asked of them.
--- Please insert Figure 1 about here ---
Data set #2 includes the history of 9,702 Australian panel members who received
133,931 invitations by an Australian web survey company in the first four months of
2015 and finished 22,871 surveys. Device use was recorded and the surveys were
optimised for the device used.
Data set #3 is a survey data set collected by a global online survey company in
March 2015. It contains responses from 1,507 Canadian survey respondents to the
question: “What proportion of surveys would you complete on the following devices?”
Answer options included personal computer, tablet and smartphone. The device on
which the survey was actually completed was also detected and recorded. The
survey was optimised for the device used.
In addition, all data sets contain socio-demographic characteristics of
respondents, including gender and age. The length of respondents’ membership on
the commercial research panel and their completion rates are also available for data
sets #2 and #3. For data set #2 the completion rate is calculated as the ratio
between finished and invited surveys over the duration of four months in 2015. For
data set #3 the completion rate is calculated as the ratio between finished and
invited surveys over the duration of a respondent’s entire panel membership. Length
of membership was coded as a binary variable: less than or equal to one year or
more than one year. Person-specific completion rates were also binarised as either
high (> 50 percent) or low (≤ 50 percent).
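As an illustration of this data preparation step, the short sketch below computes person-level completion rates and derives the two binary variables described above. The column names (invited, finished, months_on_panel) and the toy values are assumptions for illustration, not the actual field names or contents of the commercial data sets.

```python
import numpy as np
import pandas as pd

# Hypothetical panel history, one row per member (column names are
# illustrative, not the actual field names of the commercial data sets).
panel = pd.DataFrame({
    "member_id": [1, 2, 3],
    "invited": [40, 12, 25],        # survey invitations received
    "finished": [30, 3, 20],        # surveys finished (completed or screened out)
    "months_on_panel": [30, 6, 14],
})

# Completion rate: finished surveys divided by invited surveys.
panel["completion_rate"] = panel["finished"] / panel["invited"]

# Binarise as described in the text: high (> 50 percent) vs. low (<= 50 percent)
# completion rate; membership of more than one year vs. up to one year.
panel["completion_binary"] = np.where(panel["completion_rate"] > 0.5, "high", "low")
panel["membership_binary"] = np.where(panel["months_on_panel"] > 12,
                                      "more than one year", "up to one year")

print(panel[["member_id", "completion_rate",
             "completion_binary", "membership_binary"]])
```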
Actual device use and stated device preference are modelled in dependence of
age, gender and their interaction as well as – where available – in dependence of
length of panel membership and panel completion rate. These panel member
characteristics were introduced into the analysis to ensure that possible interactions
between them and the basic socio-demographic characteristics would not impact on
the validity of the conclusions drawn from the analysis.
Multinomial logit models are fitted by maximum likelihood estimation with device
used as the dependent variable and the socio-demographic and panel-specific
respondent characteristics as explanatory variables. Likelihood ratio tests indicate
whether dropping any of these explanatory variables leads to a significant reduction in
goodness-of-fit (as measured by deviance). The importance of the socio-demographic
factors and the other panel-specific respondent characteristics for predicting device
use is assessed by comparing models including and excluding these factors and
determining the difference in deviance. Furthermore, average proportions of device
use are determined by varying the levels of one specific factor while holding all other
factors at an “average” value, to indicate how much the proportions differ across the
levels of that factor. For example, average proportions of device use are determined
for men and women separately based on the fitted multinomial logit models, using
average values for all other explanatory variables, to give an indication of the effect
of gender on device use (see Fox & Hong 2009). The completion rates by device
used, age group, gender and their interactions available for data set #1 are analysed
using binomial logit models fitted by maximum likelihood. Likelihood ratio tests and
differences in deviance are again employed to assess the statistical significance and
importance of these different factors, and average completion rates are compared.
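A minimal sketch of this modelling workflow is shown below using Python's statsmodels. The toy data, variable names and coding are assumptions, and the deviance-based likelihood ratio test and averaged predicted proportions merely mirror the type of comparisons reported in Tables 1 and 2 rather than reproducing the original analysis (the effect displays in the paper follow Fox & Hong 2009).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic respondent-level data; variable names and values are illustrative.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "device": rng.choice(["pc", "smartphone", "tablet"], size=n, p=[0.8, 0.1, 0.1]),
    "age_group": rng.choice(["18-29", "30-49", "50+"], size=n),
    "gender": rng.choice(["female", "male"], size=n),
    "finished": rng.integers(0, 2, size=n),  # 1 = survey finished, 0 = not finished
})
# MNLogit expects a numerically coded dependent variable.
df["device_code"] = df["device"].map({"pc": 0, "smartphone": 1, "tablet": 2})

# Multinomial logit for device used (Research Question #1).
full = smf.mnlogit("device_code ~ age_group * gender", data=df).fit(disp=False)
reduced = smf.mnlogit("device_code ~ gender", data=df).fit(disp=False)

# Likelihood ratio test: the difference in deviance is twice the difference
# in log-likelihoods, compared against a chi-squared distribution.
delta_dev = 2 * (full.llf - reduced.llf)
df_diff = full.df_model - reduced.df_model
print(f"Age effect: delta deviance = {delta_dev:.1f}, "
      f"p = {stats.chi2.sf(delta_dev, df_diff):.4f}")

# Average predicted device proportions by gender, holding the observed
# age-group distribution fixed (a simple analogue of an effect display).
for level in ["female", "male"]:
    probs = np.asarray(full.predict(df.assign(gender=level))).mean(axis=0)
    print(level, dict(zip(["pc", "smartphone", "tablet"], probs.round(3))))

# Binomial logit for completion rates (Research Question #2).
completion = smf.glm("finished ~ device * age_group * gender", data=df,
                     family=sm.families.Binomial()).fit()
print("Completion model deviance:", round(completion.deviance, 1))
```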
Results
Research Question #1: (a) Do personal computer, tablet and smartphone
respondents differ in socio-demographic characteristics? (b) Do they differ in
panel member characteristics?
Data sets #1 and #2 were used to test the association of device use and
respondent characteristics. Both data sets are observational and thus retrospective
in nature. As can be seen in Figure 2, results for data set #1 indicate that the
personal computer still dominates the web survey market: depending on gender and
age group, between 85 and 94 percent of respondents finished the web survey on
their personal computer. Tablets were used by between four and ten percent of
respondents and smartphones by between zero and six percent.
--- Please insert Figure 2 about here ---
Figure 2 also clearly illustrates the association between socio-demographics and
device use. Younger people tend to finish surveys on smartphones, with an average
of five percent in the youngest age group compared to less than one percent in the
oldest age group. Older people – when substituting the personal computer – are
more likely to use their tablet: on average, eight percent of the surveys submitted by
respondents in the oldest age group were completed on tablets, compared to five
percent for the youngest category of respondents.
Men, on average, complete more surveys on the personal computer (93
percent as opposed to 89 percent for women), while women substitute the personal
computer with their tablet (nine percent of finished surveys as opposed to five
percent for men). The interaction effect between gender and age group is
statistically significant, even if the differences in deviance indicate that the main
effects of gender and age group dominate the interaction effect. This means that the
age effect is approximately comparable for men and women. Details on the statistical
test results are provided in Table 1.
--- Please insert Table 1 about here ---
Device use was also analysed for data set #2. The average proportions of device
use by age group, gender and their interaction as well as length of panel
membership and responsiveness are shown in Figure 3. As can be seen, the
personal computer dominates the web survey market, with at least 59 percent of
respondents using this device to finish a survey. Significant differences exist with respect to age,
gender, length of panel membership and responsiveness (see Table 1 for statistical
test results). As in the case of data set #1, respondents in the youngest age group
use smartphones more frequently for finishing surveys (16 percent as opposed to
two percent in the oldest age group), whereas tablet use is more popular with older
respondents (11 percent in the oldest age group as opposed to six percent in the
youngest age group). Men use the personal computer more frequently on
average (85 percent) than women (78 percent), who use the tablet more often (13 percent as
opposed to eight percent for men). Respondents who have been members of a
panel for more than one year are more likely to use a personal computer (82 percent
on average as opposed to 76 percent among new panel members). Study
respondents with lower completion rates are more likely to use a smartphone or
tablet (ten versus six percent for smartphone, 12 versus eight percent on tablet).
--- Please insert Figure 3 about here ---
Data set #3 contains people’s stated device preference. As such it is not
retrospective but rather offers insights into the present and future. Despite the
different nature of this data set, results reflect the key insights gained from the
analysis of data sets #1 and #2. Age, length of panel membership, and completion
rates are significantly associated with device use (see Table 1 for test results). As
opposed to the findings from the retrospective data, gender effects are not
statistically significant. Gender is therefore dropped from subsequent analyses.
Figure 4 shows that the dominance of the personal computer remains; between
45 and 83 percent of surveys are planned to be completed on this device.
Smartphone and tablet preference is higher than in the observational
retrospective data (between two and 27 percent for smartphones and between 14
and 30 percent for tablets). Younger respondents tend to substitute the computer
with the smartphone, with an average indicated use of 67 percent for the computer
and 17 percent for the smartphone. The equivalent figures for the oldest age group
are 81 percent for the computer and two percent for the smartphone. Established
panel members are more likely to use a personal computer (75 percent as compared
to 67 percent) than a smartphone or tablet, and respondents with lower overall
completion rates are more likely to use the tablet (24 percent as compared to 17
percent).
--- Please insert Figure 4 about here ---
Examining the actual device used to complete the survey against stated
device preference confirms a strong alignment of behaviour with respondents’ stated
preference. Those responding on the personal computer state that they intend to use
the personal computer on average for 79 percent of surveys filled in, as compared to
seven percent on the smartphone and 14 percent on the tablet. By contrast,
respondents filling out the survey on the tablet state on average that they will use the
tablet for 73 percent of future surveys, the personal computer for only 23 percent and
the smartphone for four percent. Respondents on the smartphone are least loyal to
their device, indicating an average future use of their smartphone for only 58 percent
of surveys, with 30 percent on the personal computer and 12 percent on the tablet.
Research Question #2: Do personal computer, tablet and smartphone
respondents display different survey completion rates?
Data set #1 enables calculation of completion rates across all surveys contained
in the data and in dependence of the device used. As can be seen in Figure 5,
completion rates vary by device used, gender, age and the interactions thereof.
Statistical tests are provided in Table 2. Completion rates are higher for personal
computers (83 percent on average) than for smartphones (63 percent on
average) or tablets (66 percent on average), and the effect of device use on
completion rates is substantially larger than that of the socio-demographic
characteristics of respondents.
--- Please insert Figure 5 about here ---
Socio-demographics are significantly associated with completion rates.
Specifically, men have higher average completion rates across all devices than
women: 84 percent versus 82 percent on the personal computer, 64 percent versus
63 percent on the smartphone, and 68 percent versus 65 percent on the tablet.
--- Please insert Table 2 about here ---
Discussion and Conclusions
The present study investigated whether web survey respondents using their personal
computer, tablet or smartphone differ in personal characteristics and survey
completion rates. This question is critical to understand in order to assess potentially
negative implications of web survey device offerings on sample representativity. The
unique aspect of this study is that it is based on two data sets containing very large
samples of actual survey completion behaviour of a large group of individuals over a
large number of survey invitations during a four-month period in 2015, thus
permitting more general conclusions than studies based on single surveys.
Key findings include:
(1) Web survey respondents still mostly use their personal computers, but
do express increasing interest in using smartphones and tablets. Across
all web survey respondents included in the retrospective observational data
set #2 used for the present study, 81 percent of respondents used personal
computers at least once during the observation period to finish a survey;
across the 1,509 respondents asked in the Canadian study to indicate their
device preference, 74 percent state that they would use the personal
computer for at least half of the surveys. These findings contradict the
conclusions from Couper (2013) and de Bruijne and Wijnant (2013a) that the
personal computer is close to the end of its lifetime as a web survey data
collection device. Our findings confirm, however, conclusions drawn in prior
work (Wells 2015; Link et al. 2014) that there is indeed an increasing interest
in completing web surveys on mobile devices.
(2) Device use is significantly associated with socio-demographic
characteristics. Both the observed survey completion data sets as well as
the data on stated preferences for web survey devices show a clear and
statistically significant association between device use and respondent
characteristics. These findings confirm conclusions drawn by de Bruijne and
Wijnant (2013b), Mavletova (2013) and Toepoel and Lugtig (2014) using single
survey study research designs.
(3) Survey completion rates vary across devices. The observational data
indicates that there is a significant difference in the completion rates of web
surveys in dependence of the device used, even on surveys that have been
optimised for mobile completion. Others have concluded that survey
satisfaction requires further investigation (de Bruijne & Wijnant 2013a; Couper
2013), with the willingness to complete surveys on a smartphone or tablet
being a barrier to completion regardless of functionality.
(4) Longevity of panel membership and personal computer usage are highly
correlated. The longer a respondent is a member of the online panel, the
more inclined they are to use a personal computer. This may be a training
effect caused by poor survey experiences on mobile devices. This is an area
that has not been studied at all in the past and deserves more attention.
Specifically, whether panel members need more training on using surveys on
mobile devices or whether researchers should focus on improving mobile
device survey experience needs further investigation. It should be noted that
panel member characteristics were not originally included in our hypotheses.
Rather, they were included during analysis to avoid misinterpretation due to
possible interactions between predictor variables. Key finding (4) therefore
represents an accidental discovery, one which points to interesting future
avenues of inquiry.
These findings lead to the conclusion that the issue of device use can have major
implications for the representativity of web survey samples. Representativity can be
negatively affected both directly, via the association of device preference with
personal characteristics of panel members, and indirectly, via reduced completion
rates on smartphones and tablets, which are exacerbated by suboptimal survey
design for these devices. Suboptimal design, in terms of both lack of adjustment to
screen size (Stapleton 2011; Buskirk & Andrus 2012; Mavletova 2013; Mavletova &
Couper 2013; McGeeney & Marlar 2013) and lack of optimisation for a good
survey experience (de Bruijne & Wijnant 2013a; Couper 2013), has been shown to
negatively influence completion rates, yet recent empirical studies (de Bruijne &
Wijnant 2013a) suggest that optimisation alone does not fully explain the lower
completion rates. Further studies are required to determine whether indeed there is
an additional reason leading to systematically lower completion rates when
smartphones are used for answering web surveys.
Practical Implications
Key practical implications for the market research industry and academic
researchers using web surveys include:
Device use cannot be seen as an isolated issue. Decisions about which device
options to offer in any given survey need to be made in view of the implications
for completion rates and sample representativity.
In future, web surveys have to be genuinely optimised for mobile devices. In so
doing, researchers can attract younger respondents, who demonstrate an interest in
using the smartphone for survey completion, and older respondents, who choose
to substitute their personal computer with a tablet. This is not a new finding, but it
is one that needs to be reinforced: literature on question adaptation and
modification is limited, 50 percent of web surveys are still not being optimised
for mobile devices (GRIT 2015), and where surveys are optimised for screen size
they are still not designed for an optimal survey experience. Cross-device surveys
where all respondents see the exact same presentation of the questions are
needed. Some of the currently most popular personal computer question formats,
such as line scales and grids, cannot be directly converted for use on mobile
devices, and are thus likely to cause measurement bias on top of the non-
response bias. Further research is needed to provide guidelines for questionnaire
design that minimise the measurement bias introduced by differences in device
use.
The personal computer still plays a huge role in web surveys. It is not likely that
this will change in the near future. Thus, it is important not to neglect improving
personal computer interfaces as well.
Non-probability samples drawn from online access panels often use weight
adjustment procedures based on pseudo design-based methods (Schonlau, van
Soest & Kapteyn 2007; Kreuter et al. 2011) and model-based estimation (Dever,
Rafferty, & Valliant 2008; Tourangeau, Conrad, & Couper 2013). However the
literature on the effectiveness of such weighting procedures have been mixed
and inconclusive and more work is needed. The inclusion of device use as a
paradata variable in these models would need further investigation to determine if
it could assist in obtaining valid estimates for the population under study.
However, given the problems associated with these statistical weighting methods
used to correct for coverage and non-response bias issues, it seems preferable
to improve the survey experience in order to obtain higher completion rates from
those who prefer to use a mobile device.
Wells (2015) concluded that mobile surveys are not currently ready to be a
complete replacement for personal computer surveys, therefore encouraging survey
methodologists to consider dual-mode designs across different devices, such as
smartphone and personal computer. This study reinforces this conclusion and should
open up further discussion on how to apply a mixed-device methodology, including
weighting implications, sample selection and probability, and the further impact that
design and survey format will have on the data.
This study’s strength lies in the large data sets containing information about
actual web survey participant behaviour over a period of time with many survey
invitations, negating the individual characteristics of the surveys (topic salience,
incentives and length) and focusing on the profile of the participant. This, at the
same time, is its biggest weakness, as it makes it impossible to ask “Why?”. For
example, other environmental or contextual factors may impact on device use: men
in full-time work may take surveys at their workplace on a personal computer, older
women may prefer to complete surveys on their tablet at home, and some people
may like to participate in surveys using a mobile device on their daily commute. It is
important, therefore, to conduct rigorous qualitative research into why
people choose different devices to complete web surveys. Such qualitative research
can shed light on when, where and under what circumstances people respond to
web surveys and on which device.
References
Antoun, C. (2014) Nonresponse in a mobile-web survey: a look at the causes and
the performance of different predictive models. Paper presented at the annual
meeting of the American Association for Public Opinion Research, Anaheim,
California, 15-18 May. (Accessed 10 September 2015).
Antoun, C. & Couper, M.P. (2013) Mobile-mostly internet users and selection bias in
traditional web surveys. Paper presented at the annual meeting of the Midwest
Association for Public Opinion Research, Chicago, Illinois, 22-23 November.
(Accessed 10 September 2015).
Buskirk, T. D., & Andrus, C. (2012) Smart surveys for smart phones: Exploring
various approaches for conducting online mobile surveys via smartphones.
Survey Practice, 5, 1. Available online at:
http://www.surveypractice.org/index.php/SurveyPractice/article/view/63/html
(Accessed 10 September 2015).
Callegaro, M. (2013) Do You Know Which Device Your Respondent Has Used to
Take Your Online Survey? Survey Practice, 3, 6. Available online at:
http://surveypractice.org/index.php/SurveyPractice/article/view/250/html
(Accessed 10 September 2015).
Callegaro, M., & DiSogra, C. (2008) Computing response metrics for online panels.
Public Opinion Quarterly, 72, 5, pp. 1008-1032.
Confirmit (2013) The Confirmit Market Research Technology Report, available online
at
https://www10.confirmit.com/201405_Website_MR_TimMacerReport_Download.html
(Accessed 10 September 2015).
Cook, C., Heath, F. & Thompson, R. L. (2000) A meta-analysis of response rates in
web-or internet-based surveys. Educational and Psychological Measurement,
60, 6, pp. 821-836.
Couper, M. P. (2013) Is the sky falling? New technology, changing media, and the
future of surveys. Keynote speech presented at 5th European Survey Research
Conference, Ljubljana, 18 July 2013. Available online at
www.europeansurveyresearch.org/conference/couper. (Accessed 10
September 2015).
de Bruijne, M., & Wijnant, A. (2013a) Can Mobile Web Surveys Be Taken on
Computers? A Discussion on a Multi-Device Survey Design. Survey Practice, 6,
4. Available online at:
http://surveypractice.org/index.php/SurveyPractice/article/view/238. (Accessed
10 September 2015).
de Bruijne, M. & Wijnant, A. (2013b) Comparing survey results obtained via mobile
devices and computers: an experiment with a mobile web survey on a
heterogeneous group of mobile devices versus a computer-assisted web
survey. Social Science Computer Review, 31, pp. 483-505.
de Bruijne, M. & Wijnant, A. (2014a) Mobile response in web panels. Social Science
Computer Review, 32, 6, pp. 728-742.
de Bruijne, M. & Wijnant, A. (2014b) Improving response rates and questionnaire
design for mobile web surveys. Public Opinion Quarterly, 78, 4, pp. 951-962.
de Bruijne, M. & Oudejans, M. (2015) Online surveys and the burden of mobile
responding. In: Engel, U. (ed.) Survey Measurements: Techniques, Data Quality and
Sources of Error, pp. 130-145. Frankfurt: Campus Verlag.
de Leeuw, E. & de Heer, W. (2002) Trends in Household Survey Nonresponse: A
Longitudinal and International Perspective. In Survey Nonresponse, ed. Robert
M. Groves, Don A. Dillman, John L. Eltinge, and Roderick J. A. Little, pp. 41–
54. New York: Wiley.
Dever, J. A., Rafferty, A. & Valliant, R. (2008) Internet surveys: can statistical
adjustments eliminate coverage bias? Survey Research Methods, 2, 2, pp. 47-62.
Dillman, D. A. (1978) Mail and telephone surveys (Vol. 3). New York: Wiley.
Dillman D. A. (1991) The design and administration of mail surveys. Annual Review
of Sociology, 17, pp. 225–249.
Dillman, D. A. (2000) Mail and internet surveys: The tailored design method (Vol. 2).
New York: Wiley.
Dillman, D. A. (2006) Procedures for conducting government-sponsored
establishment surveys: Comparisons of the Total Design Method (TDM), a
traditional cost-compensation model, and tailored design. Washington State
University, Washington.
Dillman, D. A. (2007) Mail and internet surveys (2nd ed.). New York: Wiley.
Dillman, D. A. (2011) Mail and Internet surveys: The tailored design method – 2007
update with new Internet, visual, and mixed-mode guide. Hoboken: John Wiley
& Sons.
Dillman, D. A., Smyth, J. D. & Christian, L. M. (2014) Internet, phone, mail, and
mixed-mode surveys: the tailored design method. Hoboken: John Wiley &
Sons.
ESOMAR and GRBN. (2015) Guideline for Online Sample Quality
https://www.esomar.org/uploads/public/knowledge-and-standards/codes-and-
guidelines/Online-Sample-Quality-Guideline_February-2015.pdf (Accessed 10
September 2015).
Fox, J. & Hong, J. (2009) Effect Displays in R for Multinomial and Proportional-Odds
Logit Models: Extensions to the effects Package. Journal of Statistical
Software, 32, 1, pp. 1-24.
Göritz, A.S., Reinhold, N. & Batinic, B. (2002) Online Panels. Online Social
Sciences, ed. Batinic, B., Reips, U., Bosnjak, M. & Werner, A. pp. 27–47.
Seattle: Hogrefe.
GreenBook Research Industry Trends Report, (2014) The Greenbook guide for
buyers of market research. Available online at http://www.greenbook.org/grit.
(Accessed 10 September 2015).
GreenBook Research Industry Trends Report, (2015) The Q3–Q4 2015 GreenBook
Research Industry Trends Report. Available online at
http://www.greenbook.org/grit. (Accessed 9 December 2015).
Groves, R. M. (2004) Survey errors and survey costs. Hoboken: John Wiley & Sons.
Groves, R. M. (2006) Nonresponse rates and nonresponse bias in household
surveys. Public Opinion Quarterly, 70, 5, pp. 646-675.
Groves, R. M., Fowler Jr, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., &
Tourangeau, R. (2011) Survey methodology. Hoboken: John Wiley & Sons.
Groves, R. M., & Couper, M. P. (2012). Nonresponse in household interview surveys.
New York: John Wiley & Sons, Inc.
Groves, R. M., & Kahn, R. L. (1979). Surveys by telephone: A national comparison
with personal interviews. Waltham: Academic Press, Inc.
Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on
nonresponse bias: a meta-analysis. Public Opinion Quarterly, 72, 2, pp. 167-189.
Kaplowitz, M. D., Hadlock, T. D. & Levine, R. (2004) A comparison of web and mail
survey response rates. Public Opinion Quarterly, 68, 1, pp. 94-101.
Kinesis (2013) Updated with Q3 2013 data: online survey statistics from the mobile
future. Kinesis whitepaper. Available online at: www.kinesissurvey.com/wp-
content/uploads/2014/05/updated-with-Q3-2013-Data-Mobile-whitepaper.pdf
(Accessed 10 September 2015).
Knotice Mobile Email Opens Report (Qtr1-2 2012). Available online at
http://www.knotice.com/reports/Knotice_Mobile_Email_Opens_Report_FirstHalf2012.pdf
(Accessed 10 September 2015).
Kreuter, F., Olson, K., Wagner, J., Yan, T., Ezzati-Rice, T. M., Casas-Cordero, C.,
Lemay, M., Peytchev, A., Groves, R.M., & Raghunathan, T. E. (2010). Using proxy
measures and other correlates of survey outcomes to adjust for non-response:
examples from multiple surveys. Journal of the Royal Statistical Society: Series
A (Statistics in Society), 173(2), 389-407.
Liebe, U., Glenk, K., Oehlmann, M. & Meyerhoff, J. (2015) Does the use of mobile
devices (tablets and smartphones) affect survey quality and choice behaviour in
web surveys? Journal of Choice Modelling, 14, pp. 17-31.
Link, M.W. (2013) Measuring compliance in mobile longitudinal repeated-measures
design study. Survey Practice, 6, 4. Available online at:
www.surveypractice.org/index.php/SurveyPractice/article/view/115/html
(Accessed 10 September 2015).
Link, M.W., Lai, J. & Bristol, K. (2013) Accessibility or simplicity? How respondents
engage with a multiportal (mobile, tablet, online) methodology for data
collection. Paper presented at the annual meeting of the American Association
for Public Opinion Research, Boston, Massachusetts, 16-19 May. (Accessed 10
September 2015).
Link, M.W., Murphy, J., Schober, M.F., Buskirk, T.D., Childs, J.H. & Tesfaye, C.L.
(2014) Mobile technologies for conducting, augmenting and potentially
replacing surveys: report of the AAPOR task force on emerging technologies in
public opinion research. Available online at:
www.aapor.org/AAPORKentico/AAPOR_Main/media/MainSiteFiles/REVISED_
Mobile_Technology_Report_Final_revised10June14.pdf (Accessed 10
September 2015).
Manfreda, K. L., Bosnjak, M., Berzelak, J., Haas, I., Vehovar, V., & Berzelak, N.
(2008). Web surveys versus other survey modes: A meta-analysis comparing
response rates. Journal of the Market Research Society, 50, 1, pp. 79.
Mavletova, A. (2013) Data quality in PC and mobile web surveys. Social Science
Computer Review, 31, 6, pp. 725-743.
Mavletova, A. & Couper, M.P. (2013) Sensitive topics in PC web and mobile web
surveys. Survey Research Methods, 7, 3, pp. 191-205.
Mavletova, A. & Couper, M.P. (2014) Mobile web survey design: scrolling versus
paging, SMS versus e-mail invitations. Journal of Survey Statistics and
Methodology, 2, 4, pp. 498-518.
McClain, C.A., Crawford, S.D. & Dugan, J.P. (2012) Use of mobile devices to
access computer-optimized web instruments: implications for respondent
behavior and data quality. Paper presented at the annual meeting of the
American Association for Public Opinion Research, Orlando, Florida, 17-20
May. (Accessed 10 September 2015).
McGeeney, K. & Marlar, J. (2013) Mobile browser web surveys: testing response
rates, data quality, and best practices. Paper presented at the annual meeting
of the American Association for Public Opinion Research, Boston,
Massachusetts, 16-19 May. (Accessed 10 September 2015).
Millar, M. M., & Dillman, D. A. (2012) Encouraging survey response via smartphones:
Effects on respondents’ use of mobile devices and survey response rates.
Survey Practice, 5, 3, pp. 1-6.
Murphy, L. (2014) Who is Winning the Game of Devices? Mobile vs. Desktop
http://www.greenbookblog.org/2014/09/17/who-is-winning-the-game-of-devices-
mobile-vs-desktop/ (Accessed 10 September 2015).
Peterson, G. (2012) Unintended mobile respondents. Paper presented at the annual
Council of American Survey Research Organizations Technology Conference,
New York, 30-31 May. (Accessed 10 September 2015).
Peytchev, A. & Hill, C. A. (2010) Experiments in mobile Web survey design
similarities to other modes and unique considerations. Social Science
Computer Review, 28, 3, pp. 319-335.
Schonlau, M., van Soest, A., & Kapteyn, A. (2007). Are 'Webographic' or Attitudinal
Questions Useful for Adjusting Estimates from Web Surveys Using Propensity
Scoring? RAND Working Paper Series No. WR-506. Available online at
http://www.rand.org/content/dam/rand/pubs/working_papers/2007/RAND_WR5
06.pdf (Accessed 10 November 2015)
Shih, T. H., & Fan, X. (2008). Comparing response rates from web and mail surveys:
A meta-analysis. Field methods, 20, 3, pp. 249-271.
Stapleton, C. (2011) The smart(phone) way to collect survey data. Paper presented
at the annual meeting of the American Association for Public Opinion Research,
Phoenix, Arizona, 12-15 May. (Accessed 10 September 2015).
SurveyMonkey Blog (2014) SurveyMonkey. Available at
https://www.surveymonkey.com/blog/2014/01/24/optimize-mobile-surveys/
(Accessed 10 September 2015).
Toepoel, V. & Lugtig, P. (2014) What happens if you offer a mobile option to your web
panel? Evidence from a probability-based panel of Internet users. Social
Science Computer Review, 32, 4, pp. 544-560.
Tortora, R. (2009) Attrition in consumer panels. Methodology of Longitudinal
Surveys, ed P. Lynn, pp. 235-249. Chichester: John Wiley and Sons Ltd.
Toninelli, D. (2014). Do online access panels really need to allow and adapt surveys
to mobile devices? http://www.upf.edu/survey/_pdf/RECSM_wp041.pdf
(Accessed 10 September 2015).
US Consumer Device Preference Report (Q1 2015) Available online at
http://info.movableink.com/Device-Report-Q1-2015-Download (Accessed 10
September 2015).
Vision Critical (2014) Research on research into mobile market research: three
paper review. Vision Critical University whitepaper. Available online at:
http://vcu.visioncritical.com/system/files/WP_Research_on_Research_into_Mo
bile_Market_Research.pdf (Accessed 10 September 2015).
Wang, M. & McCutcheon, A.L. (2014) Data quality among devices to complete web
surveys: comparing personal computers, smartphones, and tablets. Paper
presented at the annual meeting of the Midwest Association for Public Opinion
Research, Chicago, Illinois, 21-22 November. (Accessed 10 September 2015).
Wells, T. (2015) What market researchers should know about mobile surveys.
International Journal of Market Research, 57, 4, pp. 521.
Wells, T., Bailey, J. T. & Link, M. W. (2014) Comparison of smartphone and online
computer survey administration. Social Science Computer Review, 32, 2, pp.
238-255.
Figure 1: Overview of definition of data variables
Figure 2: Proportion of device types used to finish the survey (data set #1,
observational)
Figure 3: Device used to finish a survey (data set #2, observational)
Figure 4: Preferred web survey devices (data set #3, self-reported)
Figure 5: Average completion rates by device used, age group and gender (data set
#1, observational)
Table 1: Analysis of deviance for device use in dependence of socio-demographic and
panel-specific respondent characteristics

                                            Data set #1         Data set #2         Data set #3
                                     DF   Δ Dev.  p-value     Δ Dev.  p-value     Δ Dev.  p-value
Age groups                           10    31466  < 0.001      42478  < 0.001        57  < 0.001
Gender                                2    13895  < 0.001      10717  < 0.001         1    0.599
Interaction between age groups
  and gender                         10     1015  < 0.001       3030  < 0.001        12    0.306
Longevity on panel                    2        –        –       1172  < 0.001         8    0.015
Average completion rate               2        –        –      15527  < 0.001         7    0.024

DF … degrees of freedom; Δ Dev. … difference in deviance between the model
omitting this factor and the full model where all higher-order interactions of this factor
are ignored; – … not available for this data set.
Table 2: Analysis of deviance for completion rates in dependence of device used and
socio-demographic respondent characteristics (data set #1, observational).
DF Δ Dev p-value
Device used 2 54094 < 0.001
Age group 5 858 < 0.001
Gender 1 1831 < 0.001
Interaction between device used and age group 10 180 < 0.001
Interaction between device used and gender 2 54 < 0.001
Interaction between age group and gender 5 150 < 0.001
Interaction between device used, age group and gender 10 138 < 0.001
DF … degrees of freedom; Δ Dev. … difference in deviance between the model
omitting this factor and the full model where all higher-order interactions of this factor
are ignored.
... The best option is to follow the CATI example given in Table 3.12, which repeats the main question and all response options in all question items. List questions need to be shorter if viewed on a smartphone screen (Brosnan et al., 2017), with preferably no more than seven items. The use of smartphones creates other demands on questionnaire content and design. ...
... This is an advantage for younger respondents who are more likely than older respondents to use a smartphone to complete a questionnaire (Skeie et al., 2019), but there are two significant disadvantages. First, an analysis of millions of respondents to web surveys finds that the percentage of respondents who completed all questions in a questionnaire is higher for those who answered using a desktop or laptop (83%) versus those who answered on a tablet (66%) or smartphone (63%) (Brosnan et al., 2017). Second, the time required to answer a questionnaire on a smartphone is up to 40% longer than answering the same questionnaire on a desktop or laptop (Skeie et al., 2019;Toninelli and Revilla, 2020). ...
... Respondents are most likely to use the device on which the invitation letter was read to access the questionnaire and they are unlikely to change to a different device if they find that the process is too slow (Brosnan et al., 2017) or if the questionnaire is not optimized for their device. Unfortunately, you may not be able to control the device used to answer an online questionnaire. ...
... Ideally, an online administered survey should be viewable and completable irrespective of the device (smartphone, tablet, PC) (Brosnan et al. 2017). Preferences vary by demographic, primarily overall economic capacity and age, with handheld devices, in particular smartphones, being preferred by younger users (Brosnan et al. 2017). ...
... Ideally, an online administered survey should be viewable and completable irrespective of the device (smartphone, tablet, PC) (Brosnan et al. 2017). Preferences vary by demographic, primarily overall economic capacity and age, with handheld devices, in particular smartphones, being preferred by younger users (Brosnan et al. 2017). Studies have shown that when larger screen devices are used, completion times are shorter (Nissen and Janneck 2019), the PAR is lower (Nissen and Janneck 2019;Wenz 2017), and the accuracy of responses may be higher (Kato and Miura 2021). ...
Article
Full-text available
Participant attrition is a major concern for the validity of longer or complex surveys. Unlike paper-based surveys, which may be discarded even if partially completed, multi-page online surveys capture responses from all completed pages until the time of abandonment. This can result in different item response rates, with pages earlier in the sequence showing more completions than later pages. Using data from a multi-page online survey administered to cohorts recruited on Reddit, this paper analyses the pattern of attrition at various stages of the survey instrument and examines the effects of survey length, time investment, survey format and complexity, and survey delivery on participant attrition. The participant attrition rate (PAR) differed between cohorts, with cohorts drawn from Reddit showing a higher PAR than cohorts targeted by other means. Common to all was that the PAR was higher among younger respondents and among men. Changes in survey question design resulted in the greatest rise in PAR irrespective of age, gender or cohort.
... First, because of the coverage problem, caused by excluding non-users of the Internet or specific web services from the frame population (Bethlehem, 2010). Further concern refers to a notable nonresponse to online surveys (Daikeler, Bošnjak, & Lozar Manfreda, 2020), triggered by various factors such as survey type (Fricker, 2017), distribution channel (Bradley, 1999;Sakshaug, Vicari, & Couper, 2019) as well as a type of device used to complete the survey (Brosnan, Grün, & Dolnicar, 2017). Yet, the rapid development of information technology forces social scientists to take new sources of biases into account. ...
... For this study, the data is accessed and discussed as displayed on a 24-inch (4480 × 2520) retina display, and the viewport is determined as the first screen or stretch of content visible at a time at the display with this size specification. The screen variation is the most significant when web pages are accessed from different types of devices -PC, tablet, or phonealso, three key devices considered for encoding and representation the spatial location when designing for the Web (Brosnan et al., 2017). The variation is less significant for the PC-based uses because regardless of the display size, the data in the corpus follow a fluid web design, which means they are designed to adjust to the display and approximately the same for commonly used 21/24/27-inch displays. ...
Thesis
Full-text available
This thesis studies web homepages to understand the complex social practice of organizational identity communication on a digital medium. It examines how designs of web homepages realize discourses of identity through the mobilization and orchestration of various semiotic resources into multimodal ensembles, addressing critical organizational visual identity elements (‘logo,’ ‘corporate name,’ ‘color,’ ‘typography,’ ‘graphic shapes,’ and ‘images’), communicative content of the page, and navigation structures. By examining these three ‘strata’ of organizational identity communication, it investigates how a homepage uses formal design elements and more abstract principles of composition, such as spatial positioning and content ordering, as resources for making meaning. The data consists of three complementary sets drawn from thirty-nine web homepages of Australian university websites in 2020. Data set #1 includes four homepages for an in-depth study of organizational identity designs; data set #2 consists of 400 images from the ‘above the fold’ web area as the most strategic space on four homepages between the years 2015 and 2021; data set #3 is comprised of eight historical versions of a selected web homepage between the years 2000 and 2021, with three most representative designs for an in-depth investigation. Grounded in the discourse-analytic approach informed by multimodal social semiotics, the thesis adopts a mixed-method approach to data analysis. It applies multimodal discourse analysis combining the Genre and Multimodality model (Bateman, 2008; Bateman et al., 2017) to document the structural design patterns and social semiotic (metafunctional) approach to address the meaning potentials of the identified patterns; (Kress & van Leeuwen, 2021); content analysis (Bell, 2001; Rose, 2016) and visual social actor framework (van Leeuwen, 2008) to identify key representational tropes and visual personae. The study reveals the role of design as a mediating tool between the participants of discourse – the rhetor-institution/designer and envisaged audiences – and offers systematic insights into the uses of semiotic resources, both material (e.g., formal design elements and navigation structures) and nonmaterial (e.g., spatial considerations and content structuring), all contributing to the production of meanings and fostering identification with such meanings in the form of association with the university’s identity. Addressing the subtle differences and shifts in the form and function of key layout structures and strategies of viewer engagement, the study concludes that is plural – each university constantly revises semiotic choices and their multimodal composition to achieve specific rhetorical purposes. Together with several visual design choices, five identified strategies of viewer engagement – proximation, alignment, equalization, objectivation, and subjectivation – promote the university as a place of opportunity, achievement, sociality, and intellectual growth for a student as an individual and as a member of the community. The current research contributes to the emerging collaboration between multimodality, organization studies, and branding, recognizing the complexities and importance of multimodal communication in web-mediated texts amidst the critically increased roles of marketization and social presence in the current higher education landscape.
... Data sharing behaviour may also vary depending on the sampling method and sample composition (Brosnan et al., 2017; Elevelt et al., 2019; Jäckle et al., 2019; Keusch et al., 2019). For example, respondents from a cross-sectional, general population sample might be less likely to share additional data than respondents from special populations, such as participants of commercial online access panels, who might be more familiar with requests to share digital content. ...
Article
Full-text available
Combining surveys and digital trace data can enhance the analytic potential of both data types. We present two studies that examine factors influencing the data sharing behaviour of survey respondents for different types of digital trace data: Facebook, Twitter, Spotify and health app data. Across those data types, we compared the relative impact of four factors on data sharing: data sharing method, respondent characteristics, sample composition and incentives. The results show that data sharing rates differ substantially across data types. Two particularly important factors predicting data sharing behaviour are the incentive size and the data sharing method, which are both directly related to task difficulty and respondent burden. In sum, the paper reveals systematic variation in the willingness to share additional data which needs to be considered in research designs linking surveys and digital traces.
... One area of focus has been comparative consumer behavior in offline environments (i.e., brick-and-mortar retail stores) and online environments, and the impact of one commerce channel upon the other. The main research questions pertain to whether consumers spend more time searching for products offline than online, conduct more purchases offline or online, and how experience of one medium (e.g., a desktop device or a mobile phone) influences their behavior when interacting with the other (Brosnan et al. 2017; Huang et al. 2017). ...
Article
Full-text available
Digital media have transformed the customer buying journey and recent studies show that different media (devices) are used for different steps of the decision-making process. In this study, we apply the Uses and Gratifications (U&G) theory in the marketing context in order to investigate consumers’ choice to use desktop or mobile devices for conducting purchases. Habit and social presence are tested as moderators of the relationship between intention to buy and purchase via the two media. We report results from two laboratory experiments involving an actual purchase in various product categories. Findings indicate that consumers make significantly more purchases via desktop than via mobile phone. Further, the positive relationship between the intention to buy and product purchase is moderated by the habitual use of the medium: purchase intention × habitual use of the medium interactions are related to purchase behavior when habit is strong. Similarly, the presence of other people while the purchase is being made via desktop and mobile devices increases the likelihood of product purchase. Several implications for further academic research and managers are discussed.
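The moderation effect described in this abstract (purchase intention × habitual use of the medium) is commonly estimated as a regression with an interaction term. The following is a minimal sketch of that kind of model on simulated data; the column names (intention, habit, purchases) are hypothetical and this is not the authors' actual specification.

```python
# Minimal sketch of a moderation (interaction) model on simulated data.
# Column names are hypothetical; this is not the cited study's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "intention": rng.normal(size=n),   # stated intention to buy
    "habit": rng.normal(size=n),       # habitual use of the medium
})
# Simulated outcome: intention matters more when habit is strong (positive interaction).
df["purchases"] = (0.3 * df["intention"] + 0.2 * df["habit"]
                   + 0.4 * df["intention"] * df["habit"]
                   + rng.normal(scale=1.0, size=n))

# 'intention * habit' expands to both main effects plus the interaction term.
model = smf.ols("purchases ~ intention * habit", data=df).fit()
print(model.summary().tables[1])  # the interaction coefficient tests moderation
```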
... An HTML-linked social media invitation and email were sent by the market research firm to the potential customers (1,925 in total), notifying them that they would be entitled to a free coupon. The social media campaign comprised an embedded URL link to the website hosting the survey; data were collected within a two-month period, and the campaign produced a total of 458 responses, representing an overall 23.8 percent response rate, which is within the tolerable limit (Brosnan et al., 2015). The survey instrument, in the form of a questionnaire, was administered in English, and respondents were from the national capital of India, New Delhi. ...
The study focuses on the comparative effectiveness of two e-tail servicescape dimensions, namely product assortment and order fulfillment, on consumers’ online purchase intentions for fashion apparel shopping. The mediating effect of shopping assistance and efficiency between the e-tail servicescape dimensions and purchase intentions is examined. Additionally, the moderating influence of fulfillment reliability between the e-tail servicescape dimensions and shopping assistance is also examined. A survey instrument was used to execute the study and data were gathered from 442 participants from the national capital of India. The hypothesized relationships were verified using covariance-based structural equation modelling (CB-SEM), hierarchical regression analysis (HRA), and a bootstrap procedure. The findings reveal that there are certain e-tail value-disposition-oriented benefits in investing in the order fulfillment landscape over product assortment. The mediating role of shopping assistance and shopping efficiency is empirically verified, and the moderating influence of fulfillment reliability is also confirmed.
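The bootstrap verification of a mediating effect mentioned in this abstract is typically done by resampling the indirect effect, i.e. the product of the two regression paths. The sketch below is a generic, simplified illustration on simulated data with hypothetical variable names; it is not the study's CB-SEM model.

```python
# Generic bootstrap of an indirect (mediation) effect a*b on simulated data.
# Variable names (servicescape, assistance, purchase_intention) are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 442
servicescape = rng.normal(size=n)
assistance = 0.5 * servicescape + rng.normal(size=n)                               # path a
purchase_intention = 0.4 * assistance + 0.2 * servicescape + rng.normal(size=n)    # path b (+ direct)

def indirect_effect(x, m, y):
    # a: slope of mediator on predictor; b: slope of outcome on mediator, controlling for predictor
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)          # resample cases with replacement
    boot.append(indirect_effect(servicescape[idx], assistance[idx], purchase_intention[idx]))
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {boot.mean():.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```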
Article
Much psychological research depends on participants’ diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection—paper/pencil, computer/web-based, and smartphone—affect participants’ diligence vs. “careless responding” tendencies and, thus, data quality? Results from prior studies suggest that different data collection modes produce a comparable prevalence of careless responding tendencies. However, as technology develops and data are collected with increasingly diversified populations, this question needs to be readdressed and taken further. The present research examined the effect of survey mode on careless responding in a repeated-measures design with data from three different samples. First, in a sample of working adults from China, we found that participants were slightly more careless when completing computer/web-based survey materials than in paper/pencil mode. Next, in a German student sample, participants were slightly more careless when completing the paper/pencil mode compared to the smartphone mode. Finally, in a sample of Chinese-speaking students, we found no difference between modes. Overall, in a meta-analysis of the findings, we found minimal difference between modes across cultures. Theoretical and practical implications are discussed.
Article
In the literature about web survey methodology, significant efforts have been made to understand the role of time-invariant factors (e.g. gender, education and marital status) in (non-)response mechanisms. Time-invariant factors alone, however, cannot account for most variations in (non-)responses, especially fluctuations of response rates over time. This observation inspires us to investigate the counterpart of time-invariant factors, namely time-varying factors, and the potential role they play in web survey (non-)response. Specifically, we study the effects of time, weather and societal trends (derived from Google Trends data) on the daily (non-)response patterns of the 2016 and 2017 Dutch Health Surveys. Using discrete-time survival analysis, we find, among other things, that weekends, holidays, pleasant weather, disease outbreaks and terrorism salience are associated with fewer responses. Furthermore, we show that using these variables alone achieves satisfactory prediction accuracy of both daily and cumulative response rates when the trained model is applied to future unseen data. This approach has the further benefit of requiring only non-personal contextual information and thus involving no privacy issues. We discuss the implications of the study for survey research and data collection.
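Discrete-time survival analysis of the kind named in this abstract is usually estimated as a logistic regression on a person-period (here, respondent-day) data set, with time-varying covariates attached to each day. The sketch below illustrates that setup on simulated data with hypothetical covariates; it is not the authors' model or data.

```python
# Sketch: discrete-time survival model as a logistic regression on person-day records.
# Covariates (weekend, pleasant_weather) and their effect sizes are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for person in range(1000):
    for day in range(1, 15):                       # up to 14 days in the field
        weekend = int(day % 7 in (6, 0))           # illustrative weekend indicator
        pleasant_weather = int(rng.random() < 0.3)
        # lower daily response odds on weekends and pleasant days (illustrative signs)
        p = 1 / (1 + np.exp(-(-2.0 - 0.4 * weekend - 0.3 * pleasant_weather)))
        responded = int(rng.random() < p)
        rows.append((person, day, weekend, pleasant_weather, responded))
        if responded:
            break                                  # person leaves the risk set after responding

df = pd.DataFrame(rows, columns=["person", "day", "weekend", "pleasant_weather", "responded"])

# A fuller model would use day dummies or splines for the baseline hazard; a linear
# day trend keeps this sketch small and avoids collinearity with the weekend indicator.
model = smf.logit("responded ~ day + weekend + pleasant_weather", data=df).fit(disp=False)
print(model.params.filter(["weekend", "pleasant_weather"]))
```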
Article
Full-text available
Probing questions, essentially open-ended comment boxes that are attached to a traditional closed-ended question, are increasingly used in online surveys. They give respondents an opportunity to share information that goes beyond what can be captured through standardized response categories. However, even when probes are non-mandatory, they can add to perceived response burden and incur a cost in the form of lower respondent cooperation. This paper seeks to measure this cost and reports on a survey experiment that was integrated into a short questionnaire on a German salary comparison site (N = 22,306). Respondents were randomly assigned to one of three conditions: a control without a probing question; a probe that was embedded directly into the closed-ended question; and a probe displayed on a subsequent page. For every meaningful comment gathered, the embedded design resulted in 0.1 break-offs and roughly 3.7 item missings for the closed-ended question. The paging design led to 0.2 additional break-offs for every open-ended answer it collected. Against expectations, smartphone users were more likely to provide meaningful (albeit shorter) open-ended answers than those using a PC or laptop. However, smartphone use also amplified the adverse effects of the probe on break-offs and item non-response to the closed-ended question. Despite documenting their hidden cost, this paper argues that the value of the additional information gathered by probes can make them worthwhile. In conclusion, it endorses the selective use of probes as a tool to better understand survey respondents.
Article
Full-text available
One question that arises when discussing the usefulness of web-based surveys is whether they gain the same response rates compared to other modes of collecting survey data. A common perception exists that, in general, web survey response rates are considerably lower. However, such unsystematic anecdotal evidence could be misleading and does not provide any useful quantitative estimate. Meta-analytic procedures synthesising controlled experimental mode comparisons could give accurate answers but, to the best of the authors' knowledge, such research syntheses have so far not been conducted. To overcome this gap, the authors have conducted a meta-analysis of 45 published and unpublished experimental comparisons between web and other survey modes. On average, web surveys yield an 11% lower response rate compared to other modes (the 95% confidence interval ranges from 6% to 15% to the disadvantage of the web mode). This response rate difference to the disadvantage of the web mode is systematically influenced by the sample recruitment base (a smaller difference for panel members as compared to one-time respondents), the solicitation mode chosen for web surveys (a greater difference for postal mail solicitation compared to email) and the number of contacts (the more contacts, the larger the difference in response rates between modes). No significant influence on response rate differences can be revealed for the type of mode web surveys are compared to, the type of target population, the type of sponsorship, whether or not incentives were offered, and the year the studies were conducted. Practical implications are discussed.
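A pooled response-rate difference with a 95% confidence interval, as reported in this abstract, can be obtained from study-level effect sizes by inverse-variance weighting. The sketch below shows the fixed-effect version of that calculation on made-up inputs; the cited meta-analysis pooled 45 comparisons and may have used a different estimator.

```python
# Inverse-variance (fixed-effect) pooling of response-rate differences, on made-up inputs.
import numpy as np

# Response-rate difference (web minus other mode) and its variance for a few fictitious studies.
diffs = np.array([-0.14, -0.08, -0.12, -0.05, -0.17])
variances = np.array([0.0009, 0.0016, 0.0012, 0.0020, 0.0010])

weights = 1.0 / variances
pooled = np.sum(weights * diffs) / np.sum(weights)   # inverse-variance weighted mean
se = np.sqrt(1.0 / np.sum(weights))                  # standard error of the pooled estimate
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled difference = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```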
Article
Full-text available
The considerable growth in the number of smart mobile devices with a fast Internet connection provides new challenges for survey researchers. In this article, I compare the data quality between two survey modes: self-administered web surveys conducted via personal computer and those conducted via mobile phones. Data quality is compared based on five indicators: (a) completion rates, (b) response order effects, (c) social desirability, (d) non-substantive responses, and (e) length of open answers. I hypothesized that mobile web surveys would result in lower completion rates, stronger response order effects, and less elaborate answers to open-ended questions. No difference was expected in the level of reporting in sensitive items and in the rate of non-substantive responses. To test the assumptions, an experiment with two survey modes was conducted using a volunteer online access panel in Russia. As expected, mobile web was associated with a lower completion rate, shorter length of open answers, and similar level of socially undesirable and non-substantive responses. However, no stronger primacy effects in mobile web survey mode were found.
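The first of the five data quality indicators listed in this abstract, completion rates, is typically compared across modes with a two-proportion test. A minimal sketch with invented counts follows; the figures are not the study's results.

```python
# Two-proportion z-test comparing completion rates between mobile web and PC web modes.
# The counts below are invented for illustration only.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

completed = [410, 465]   # completed interviews: mobile web, PC web
started = [550, 540]     # respondents who started the survey in each mode

z, p_value = proportions_ztest(count=completed, nobs=started)
rates = [c / n for c, n in zip(completed, started)]
print(f"completion rates: mobile={rates[0]:.2%}, PC={rates[1]:.2%}, z={z:.2f}, p={p_value:.4f}")

# Wilson confidence interval for each mode's completion rate.
for label, c, n in zip(["mobile", "PC"], completed, started):
    lo, hi = proportion_confint(c, n, method="wilson")
    print(f"{label}: 95% CI [{lo:.2%}, {hi:.2%}]")
```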
Article
Full-text available
This article reports on a pilot study that was conducted in a probability-based online panel in the Netherlands. Two parallel surveys were conducted: one in the traditional questionnaire layout of the panel and the other optimized for mobile completion with new software that uses a responsive design (optimizing the layout for the device chosen). The latter questionnaire was optimized for mobile completion, and respondents could choose whether they wanted to complete the survey on their mobile phone or on a regular desktop. Results show that a substantial number of respondents (57%) used their mobile phone for survey completion. No differences were found between mobile and desktop users with regard to break-offs, item nonresponse, time to complete the survey, or response effects such as length of answers to an open-ended question and the number of responses in a check-all-that-apply question. A considerable number of respondents gave permission to record their GPS coordinates, which are helpful in defining where the survey was taken. Income, household size, and household composition were found to predict mobile completion. In addition, younger respondents, who typically form a hard-to-reach group, show higher mobile completion rates.
Article
This article investigates unintended mobile access to surveys in online, probability-based panels. We find that spontaneous tablet usage is drastically increasing in web surveys, while smartphone usage remains low. Further, we analyze the bias of respondent profiles using smartphones and tablets compared to those using computers, on the basis of several sociodemographic characteristics. Our results indicate not only that mobile web respondents differ from PC users but also that tablet users differ from smartphone users. While tablets are used for survey completion by working (young) adults, smartphones are used merely by the young. In addition, our results indicate that mobile web respondents are more progressive and describe themselves more often as pioneers or forerunners in adopting new technology, compared to PC respondents. We further discover that respondents’ preferences for devices to complete surveys are clearly in line with unintended mobile response. Finally, we present a similar analysis on intended mobile response in an experiment where smartphone users were requested to complete a mobile survey. Based on these findings, testing on tablets is strongly recommended in online surveys. If the goal is to reach young respondents, enabling surveys via smartphones should be considered.
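Detecting whether a panel member completed a survey on a PC, tablet or smartphone, as studied in this abstract, is usually inferred from the browser's User-Agent string. The sketch below is a deliberately crude regex heuristic shown only to illustrate the idea; production survey platforms rely on maintained device-detection libraries rather than hand-written rules like these.

```python
# Crude User-Agent heuristic for classifying respondents' devices (illustration only).
# Real survey software uses maintained device-detection libraries, not rules like these.
import re

def classify_device(user_agent: str) -> str:
    ua = user_agent.lower()
    # Android tablets typically omit the 'Mobile' token that Android phones include.
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if re.search(r"iphone|ipod|windows phone", ua) or ("android" in ua and "mobile" in ua):
        return "smartphone"
    return "pc"

examples = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15 Mobile/15E148",
    "Mozilla/5.0 (Linux; Android 13; SM-X200) AppleWebKit/537.36",  # Android tablet
]
for ua in examples:
    print(classify_device(ua), "<-", ua[:60])
```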
Article
With the growing popularity of smartphones and tablet PCs (tablets) equipped with mobile browsers, the possibilities to administer surveys via mobile devices have expanded. To investigate the possible mode effect on answer behavior, results are compared between a mobile device–assisted web survey and a computer-assisted web survey. First, a premeasurement in the CentERpanel is conducted to analyze the user group of mobile devices. Second, the users are randomly allocated to one of three conditions: (1) a conventional computer-assisted web survey, (2) a hybrid version, i.e. a computer-assisted web survey with a layout similar to a mobile web survey, and (3) a mobile web survey. Special attention is given to the design of the mobile web questionnaire, taking the small screen size and typical functionalities of touchscreens into account. The findings suggest that survey completion on mobile devices need not lead to different results than on computers, but one should be prepared for a lower response rate and longer survey completion time. Further, the study offers considerations for researchers on survey satisfaction, location during survey completion, and preferred device to access the Internet. With adaptations, surveys can be conducted on the newest mobile devices, although new challenges are emerging and further research is called for.
Article
The dramatic rise of smartphones has profound implications for survey research. Namely, can smartphones become a viable and comparable device for self-administered surveys? The current study is based on approximately 1,500 online U.S. panelists who were smartphone users and who were randomly assigned to the mobile app or online computer mode of a survey. Within the survey, we embedded several experiments that had been previously tested in other modes (mail, PC web, mobile web). First, we test whether responses in the mobile app survey are sensitive to particular experimental manipulations as they are in other modes. Second, we test whether responses collected in the mobile app survey are similar to those collected in the online computer survey. Our mobile survey experiments show that mobile survey responses are sensitive to the presentation of frequency scales and the size of open-ended text boxes, as are responses in other survey modes. Examining responses across modes, we find very limited evidence for mode effects between mobile app and PC web survey administrations. This may open the possibility for multimode (mobile and online computer) surveys, assuming that certain survey design recommendations for mobile surveys are used consistently in both modes.