Aslib Proceedings
Online versus offline research: implications for evaluating digital media
Barrie Gunter, David Nicholas, Paul Huntington and Peter Williams
Article information:
To cite this document:
Barrie Gunter, David Nicholas, Paul Huntington and Peter Williams, (2002), "Online versus offline research: implications for evaluating digital media", Aslib Proceedings, Vol. 54 Iss 4 pp. 229-239
Permanent link to this document:
http://dx.doi.org/10.1108/00012530210443339
Downloaded on: 02 December 2014, At: 23:36 (PT)
References: this document contains references to 57 other documents.
Downloaded by Northumbria University At 23:36 02 December 2014 (PT)
Online versus offline research: implications for evaluating digital media
Barrie Gunter, David Nicholas, Paul Huntington and Peter Williams
Introduction
The rapid expansion of the Internet and other
interactive communications technologies has
opened up significant new opportunities for
the provision of public services, conducting of
business and conveyance of personal
transactions among ordinary people (Brown
et al., 2001; Dutton, 1999). These
communications technologies have
widespread potential to change the way
institutions, corporations and individuals
conduct their day-to-day business (Dutton,
1996; Woolgar, 2000). The public sector has
exhibited a significant drive towards applying
information and communications
technologies to a growing range of services
(Dutton, 1999; Taylor et al., 1996).
Central government in Britain has
conceived of these new media as a conduit
through which citizens can gain close and
more direct contact with the political elite,
bringing in a new kind of democracy
(Coleman, 2000). This kind of development
has been illustrated by recent experiments
within the National Health Service in the
provision of health information and advisory
services online via the Web, touch-screen
kiosks and digital interactive television.
Information is provided via these systems on a
variety of health topics in text and picture
form. Users can also gain access to back-up
health advisory, diagnostic and treatment
services through these online channels
(Gunter et al., 2001; Nicholas et al., 2001a).
One particular development in the NHS is a
drive to ensure that patients are given better
opportunities to provide feedback on their
satisfaction with health services. There are
twin objectives to standardise patient
satisfaction measurement across Britain and
to move as much of this assessment as
possible online. In addition, online systems
are to be developed to enhance the integration
of such patient feedback with management
policies and decision-making processes. The
reality is that while some NHS Trusts
conduct patient satisfaction feedback, others
do not. Further, while some may conduct this
research online, others will continue to rely
principally on traditional offline survey
methodologies.
The authors
Barrie Gunter is Professor of Journalism Studies and Director of Research, Department of Journalism Studies, University of Sheffield, UK. David Nicholas is Professor and Head of Department, Paul Huntington is Research Fellow, and Peter Williams is Research Fellow, all in the Department of Information Science, City University, London, UK.
Keywords
Communications technology, Internet, Electronic mail, Online retrieval, Questionnaires, Surveys
Abstract
The growth of the Internet and other digital media has opened up exciting opportunities for the provision of public services, for business and for personal transactions. Comparisons between the earliest forms of "online" research, in the form of telephone interviewing, and offline data collection via face-to-face interviews or self-completion questionnaires, revealed that the modality within which research was conducted could affect research findings. In examining the evidence, this paper indicates that the use of online methodologies has important implications for sampling, response rates, quality of data produced, and operational practices in research projects. Online research is restricted to individuals with access to relevant technologies (e.g. the Internet) and, where online technology penetration is limited, survey samples are unlikely to represent the general population. Online surveys, however, can produce quicker response rates than offline surveys and also richer open-ended responses. The important point is to recognise what strengths and weaknesses are associated with different methodologies and what differences can exist between online and offline data collection procedures.
Electronic access
The research register for this journal is available at http://www.emeraldinsight.com/researchregisters
The current issue and full text archive of this journal is available at http://www.emeraldinsight.com/0001-253X.htm
Received 10 December 2001
Accepted 28 January 2002
Aslib Proceedings, Volume 54, Number 4, 2002, pp. 229-239. © MCB UP Limited. ISSN 0001-253X. DOI 10.1108/00012530210443339
With the promise of a roll-out of other central and local government services
online, it is important to consider the
implications this mixed research format
scenario might have for standardising online
user satisfaction measurement across different
suppliers. On the evidence of market research
experiments, in which the nature and quality
of data yielded by asking the same questions
in online versus offline environments were
tested, the implications could be significant.
Early online research
This paper reviews evidence based on
comparisons between online and offline
research. To begin with though, it is
important to be clear what is meant by
"online" research here. In market research, what we now term "online" research began with telephone surveying in the 1970s. By the middle of that decade, computer-aided telephone interviewing (CATI) systems were introduced, in which the computer guides the telephone interview (Tyebjee, 1979).
Data were collected electronically by the
interviewer keying in the numerical codes for
the respondent's answers directly into a
computer database. This software
pre-determined question order, routing paths,
and response options, such that potential
interviewer error was reduced. It also speeded
up data processing because it circumvented
the need for a separate data entry stage. Next,
the 1980s saw the introduction of
computer-assisted techniques to accompany
face-to-face interviewing (computer-aided
personal interviewing or CAPI), whereby the
questions were placed in a computerised
format on a portable PC. Further
developments in telephone surveying saw the
use of totally automated telephone systems
with the initial call and subsequent interview
being conducted by a computer using a
digitised human voice. Respondents answer
by pressing touch-tone buttons (Havice and
Banks, 1992).
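The article notes that CATI software pre-determined question order, routing paths and response options, with the interviewer keying in numeric codes. A minimal sketch of that idea (the questions, codes and routing graph below are invented for illustration, not taken from the article) might look like this:

```python
# Illustrative sketch of CATI-style scripted routing: question order and
# response options are fixed in advance, the interviewer only enters numeric
# codes, and invalid codes are rejected, reducing interviewer error.
# All question texts and routes here are hypothetical.

QUESTIONS = {
    "q1": {"text": "Do you use the Internet? 1=Yes 2=No",
           "options": {"1": "q2", "2": "end"}},
    "q2": {"text": "How often? 1=Daily 2=Weekly 3=Less often",
           "options": {"1": "end", "2": "end", "3": "end"}},
}

def run_interview(codes):
    """Walk the routing graph, recording coded answers directly."""
    record, current = {}, "q1"
    for code in codes:
        if current == "end":
            break
        question = QUESTIONS[current]
        if code not in question["options"]:
            raise ValueError(f"invalid code {code!r} for {current}")
        record[current] = code
        current = question["options"][code]  # routing path chosen by answer
    return record

# A "No" at q1 routes straight past q2:
# run_interview(["2"]) -> {"q1": "2"}
```

Because answers land straight in a coded record, the separate data-entry stage the article mentions is circumvented.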
During the 1990s, the growth of Internet
technology opened up new data collection
possibilities whereby self-administered
questionnaires could also be
computer-mediated and hence placed in an
online environment. Under the computer-
assisted self-completion inventories (CASI)
system, respondents completed
questionnaires by themselves within a
computer-mediated framework. These
electronic forms of questionnaire
administration represent what is
predominantly thought of as "online"
surveying these days. With the growing
popularity of the Internet, online surveys are
expected to become more popular than ever.
Indeed, according to some observers, online
research is the fastest growing development in
social research since the introduction of
scientific opinion polls in 1936 (Taylor,
2000).
Notwithstanding the significance of this
development, it is important to recognise that
online and offline research are not the same
(Dillman, 1999). This observation is
consistent with findings from earlier
mixed-mode surveys that compared data
yielded by self-completion paper
questionnaires, telephone and face-to-face
interviews (de Leeuw, 1992). In particular,
differences have been found in the results
produced by online and offline surveys. From
early on, electronic surveys were regarded as
being closest in type to a self-administered
paper questionnaire in that respondents work
through the questionnaire by themselves.
However, the interactive element of the
electronic survey, taking the form of
automated routing of respondents through a
self-completion questionnaire on the basis of
response options chosen, meant that it also
had something in common with telephone
and face-to-face interviews in which
respondents interacted with the source of
questions (Kiesler and Sproull, 1986). This
combination of features therefore gave the
electronic survey quite a different character
from other forms of offline data collection.
As with self-completion questionnaires, the
limited "social presence" of the online survey
might facilitate answers less influenced by
social desirability effects as compared with
telephone or face-to-face interviews. At the
same time, the interactive feature might lead
respondents to be more engaged than they
would be with standard self-completion
questionnaires. This might, in turn, lead
respondents to complete more items, make
fewer mistakes, give longer answers to
open-ended questions, and disclose more
about themselves.
The notion of "social presence" has been
used on previous occasions to analyse the
effectiveness of computer-mediated
communication (CMC). Within this
framework, CMC is compared with
face-to-face meetings between individuals
(Walther, 1992; Reeves and Nass, 1996).
Within a CMC context, users may form
equivalent perceptions of the communication
interaction, though in the case of
video-conferencing, telephone, or e-mail
links, the availability of verbal and non-verbal
cues may diminish compared with the
face-to-face situations, thus rendering the
quality or richness of the communication
poorer as a result (Rice, 1993).
Even in the context of telephone
interviewing, the automated calling
technology has been found to create an
interview largely free of social context effects
as compared with a standard human
telephone interview. In an automated
interview, respondents were more open and
honest and willing to answer negatively
(Havice and Banks, 1992).
Types of online survey
Before examining some direct comparisons
between online and offline surveys, it is worth
noting that online research can take different
forms. In comparisons of online and offline
research, it has been rare for more than one
type of online method to be examined within
a single study.
The simplest online research method
requires that the researcher embed the
questions in an e-mail that is sent to a
potential respondent. The recipient of the
e-mail is typically told that he/she may
respond to the survey by hitting the "reply
command" and by "including the original
message in the response". Then the recipient
can scroll down to the survey questions and
answer them in the space provided. The
major advantage of this method is its
simplicity: it is easy for the researcher to form
and send the survey, and it is a simple task for
the respondent to answer and return it. This
method, however, limits the researcher's
ability to formulate sophisticated
questionnaires. That is, the researcher must
use a flat text that cannot be in bold, in
colour, or altered in size, and graphics cannot
be used. The researcher must be especially
careful when positioning the response spaces
for fixed-alternative questions since the
respondent's answers can affect the alignment
of other items on the survey. Pre-testing of the
embedded e-mail instrument is necessary to
ensure that the questions remain properly
aligned after transmission and to make sure
that the respondent's answers do not
negatively affect the structure of the
questionnaire.
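The constraints described above (flat text only, response spaces whose alignment must survive transmission and typed answers) can be sketched as follows. The questions and layout conventions are invented for illustration; they are not from the article:

```python
# Hedged sketch of the simplest online method: a flat-text questionnaire
# embedded in an e-mail body. No bold, colour, sizing or graphics are
# available, so each response space is placed on its own line to make it
# less likely that a typed answer disturbs the alignment of later items.

def build_email_survey(questions):
    lines = ["Please hit Reply, include this message, and type an X",
             "between the brackets of your chosen answer.", ""]
    for number, (text, options) in enumerate(questions, start=1):
        lines.append(f"{number}. {text}")
        for option in options:
            lines.append(f"   [ ] {option}")  # fixed-alternative response space
        lines.append("")
    return "\n".join(lines)

survey = build_email_survey([
    ("Have you used our online service?", ["Yes", "No"]),
    ("How satisfied were you?", ["Very", "Somewhat", "Not at all"]),
])
print(survey)
```

As the article advises, such an instrument would still need pre-testing, since different mail clients may re-wrap the plain text in transit.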
Another way of conducting an online survey
is to send the questionnaire as an attachment
in an e-mail. With this approach, the
respondent is asked to download the attached
questionnaire to a word processor on his/her
personal computer for attention. Once the
questions have been answered, the
respondent can save the file and close the
application. The respondent can return the
questionnaire by attaching the document to
an e-mail reply. The main advantage of this
method is that the questionnaire - through
the use of font changes, bold text, check boxes
and possibly graphics - can be given a format
and appearance that will be inviting and
pleasing to the respondent. This approach,
however, has several drawbacks.
First, it involves multiple steps for the
respondent to retrieve, complete, and return
the survey. Some potential respondents may
find these steps too demanding. Second, it
assumes that the respondent has the ability
- both in terms of software and knowledge - to
retrieve and send e-mail attachments. Third,
if the potential respondent fears getting a
virus from downloaded files, he/she may be
reluctant to participate in the survey. Fourth,
the method assumes that the respondent has a
word processor that can "read" the file that
has been attached to the e-mail. Fortunately,
most word processors can "read" files created
by a different word processor, but if the
potential respondent has an outdated or
unsophisticated word processor, it may be
impossible for his/her software to "read" a
foreign file.
A third way to conduct an online survey is
to send an e-mail with an attachment that
contains a survey program. That is, when the
potential respondent downloads this file and
executes it, a survey program can be activated
that has as much creativity in it as desired by
the researcher. The program can allow for
complex skip patterns and can incorporate
"live" graphics and audio features. This
approach to online surveying requires
sophisticated programming expertise and is
more costly than the other methods.
Moreover, it assumes that the respondent has
the equipment and ability to download and
run the program. This method, therefore,
appears to be more problematic than the
e-mail attachment option.
A fourth method is to use a Web-based
survey. With this method, e-mails are
typically sent to potential respondents inviting
them to go to a Web address and to complete
the questionnaire there. This method allows
for elaborately designed questionnaires with
colour, graphics, audio features and
sophisticated skip patterns. Moreover, the
software is usually programmed to collect and
provide ongoing summaries of the data
automatically. Again, this type of survey places a
demand on respondents to access the site
where questions can be found and requires a
degree of competence in engaging with the
Internet. Further, potential respondents who
have access only to e-mail, and not to the
World Wide Web, will not be able to respond
to this form of survey. This would not matter,
of course, if it were only Web users who were
being canvassed.
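The automatic coding and ongoing summaries that distinguish Web-based surveys can be sketched in a few lines. The question, answer codes and class names below are hypothetical, invented purely to illustrate the idea:

```python
# Hedged sketch of why Web-based surveys need no manual coding step:
# each submitted answer is mapped straight to a numeric code, and a
# running summary is updated as responses arrive.

from collections import Counter

CODEBOOK = {"very satisfied": 1, "satisfied": 2, "dissatisfied": 3}

class WebSurvey:
    def __init__(self):
        self.responses = []       # coded records, one per respondent
        self.summary = Counter()  # ongoing tally, available at any time

    def submit(self, answer):
        code = CODEBOOK[answer.strip().lower()]  # automatic coding
        self.responses.append(code)
        self.summary[code] += 1
        return code

survey = WebSurvey()
for answer in ["Very satisfied", "satisfied", "Satisfied"]:
    survey.submit(answer)
# survey.summary now tallies two code-2 and one code-1 responses
```

By contrast, a mail or fax survey would require each returned form to be keyed into such a structure by hand, which is the operational difference the next section takes up.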
Differences between online and offline
surveys
Online surveys are more than the simple
transference of a standard self-completion
questionnaire or interview schedule into an
electronic environment. Online research has
distinctive qualities and these alone may be
sufficient to yield quite different results from
essentially the same research conducted in an
offline environment. The key differences
between online and offline research have been
found to relate to sampling of respondents,
project design and operational procedures,
response rates, and the quality of data
obtained. These issues will be examined
below.
Sampling issues
One concern with online data collection has
focused on respondent sampling. Online
interviews or self-administered questionnaire
distribution are restricted to individuals who
have access to the Internet. Hence, data
collected online are not representative of the
general population - most of whom still do
not have Internet access. One reason why
telephone surveying did not take off until the
early 1970s was the problem of missing those
who were not yet connected.
Even in the USA, which has the most
advanced Internet market in the world, the
Internet population does not have the same
demographic profile as the general
population. The online population comprises
individuals who have higher educational levels
and higher incomes, are younger, live
predominantly in urban areas, mostly in
dual-parent families, and are of white or
Asian-Pacific descent (US Department of
Commerce, 1999). Thus, those who have a
lower education, lower income, are older, live
in predominantly rural areas, in one-parent
families and are of black or Hispanic descent
tend to be under-represented (Palmquist and
Stueve, 1996; Dommeyer and Moriarty,
2000; US Department of Commerce, 1999;
White, 2000).
The value and acceptance of online data
therefore varies with the market information
needs of client organisations and the types of
markets within which they operate. Thus,
businesses whose products or services are
aimed at target markets known to have
widespread Internet access can consider using
online research in the knowledge that such
markets are well represented among the
Internet-connected population. The use of
online research on company intranets can also
provide a convenient and effective way to
conduct internal corporate surveys among
employees. However, for research questions
that seek the responses of people in general,
online surveys run the risk of failing to reach
representative samples.
Representative samples are less important
in the context of interpretative research where
purposive sampling of special groups is the
objective. In this instance, generalisation of
findings to the greater population may not be
as important as gaining an understanding of
how certain types of people respond to
particular questions and the ways they
articulate their answers. Indeed, as later
evidence will show, online questioning can
sometimes yield richer responses than offline
methods.
In one cross-modality study, Cobanoglu
et al. (2001) compared a standard postal
survey approach with fax and Web-based
self-completion surveys. They found that
ordinary postal surveys gave better coverage
because the general public and even members
of organisations and company employees did
not always have a fax machine or e-mail
address. According to these authors, another
complicating factor was that people change
their addresses from time to time. While they
may be in the habit of leaving a forwarding
address for standard postal delivery, which
might be valid for up to 12 months, this
behaviour is not so often followed for
e-mail. Nonetheless, the technology exists
to facilitate the forwarding of electronic post.
With time, as e-mail becomes a standard that
replaces so-called "snail-mail", users will no
doubt arrange for the automatic forwarding of
electronic messages just as they would
ordinary post.
Operational and design differences
The starting point for the launch of any new
research, regardless of the platform on which
it is administered, requires researchers to give
full consideration to such basic issues as the
key questions that are being addressed, the
required sample, and the kinds of data that
need to be generated. In addition, there
are other considerations that need to be borne
in mind when conducting research online.
Online surveys carry different design and
operational implications from offline surveys.
Postal surveys are easy to construct, but it
can take many hours to prepare them for
distribution. Copying, labelling, folding,
stuffing the envelopes and preparing the
return envelopes (with stamps or business
reply) takes considerable labour and financial
resources. Fax surveys, if conducted with a
modem-capable computer and mail-merge
program such as Microsoft Word, can be
prepared in minutes and the transmission of
faxes can even be done automatically. With
Web-based surveys, the initial set-up takes
some time, although the transmission of
thousands of survey questionnaires can be
done in minutes. Respondents usually
complete and transmit the answers
immediately while they have the e-mail open
on their computer and can easily retrieve it. In
addition, Web-based surveys are coded
automatically as respondents enter their
answers, while mail and fax surveys have to be
coded manually.
A further feature of online surveying, using
electronic distribution systems, is the ability
to present questionnaires in multi-media
formats. Text can be accompanied by images
that can serve as prompts or stimulus
materials. Of course, it is possible to use
high-quality images and complex graphs with
postal as well as Web-based surveys. With the
latter, however, audio, video and animated
graphics can also be included to clarify
questions or illustrate response options
(Bachman et al., 1999).
Response rates
Response rates in e-mail surveys are often
poorer than for offline surveys (Dommeyer
and Moriarty, 2000; Cobanoglu et al., 2001),
though this is not invariably the case
(Opperman, 1995; Parker, 1992). Repeated
contacts with respondents and use of
reminders can push up response rates for
e-mail surveys (Mehta and Sivadas, 1995).
Overall though, the evidence on response
rates is far from consistent.
Kiesler and Sproull (1986) compared
response patterns to a paper questionnaire
and an electronic questionnaire. They found
that returns were lower for the electronic
questionnaire (67 per cent) than for the paper
questionnaire (75 per cent). However, the
e-survey was returned more quickly on
average than the paper survey (9.6 days versus
10.8 days). Respondents who answered the
e-survey made fewer mistakes and answered
more items. They also gave richer answers to
open-ended questions. There was no
difference between the two types of survey in
the distribution of responses to attitude
scales. There was evidence, though, of weaker
social desirability effects in answers given in
the e-survey. In other words, the answers were
likely to have been less influenced by the
desire to please or to be seen in a good light.
Further research confirmed that electronic
surveys often fail to attain the response rates
of paper questionnaire surveys (e.g. Couper
et al., 1997). One exception to this finding
was a study by Parker (1992) that obtained a
higher response rate for an e-survey (63 per
cent versus 38 per cent) by a significant
margin for reasons to be described later. Tse
et al. (1995) surveyed a university population
and obtained a higher response rate to mail
(27 per cent) than e-mail (6 per cent).
Bachman et al. (1996) surveyed business
school deans and obtained a higher response
rate to mail (65.6 per cent) than e-mail (52.5
per cent). The e-mail response, however, was
a lot quicker than the mail response (4.68
days versus 11.18 days). When Bachman et al.
(1999) re-surveyed business school deans,
e-mail (46 per cent) produced the better
response than standard mail (19.1 per cent).
Schaefer and Dillman (1998) found little
difference between standard mail (57.5 per
cent) and e-mail (58 per cent) surveys in their
overall response rate, but the e-mail survey
yielded fewer unanswered questions on
average. Respondents replied more quickly to
the e-survey than the standard mail survey.
Dommeyer and Moriarty (2000) reviewed
12 studies, over half of which indicated a
response rate to an e-mail survey that trailed
behind that of another survey by at least eight
percentage points and as much as 48
percentage points. Three studies showed no
significant differences in response rates
(Mehta and Sivadas, 1995; Schaefer and
Dillman, 1998; Weible and Wallace, 1998).
Weible and Wallace (1998) compared mail,
fax, e-mail and Website questionnaire
formats. Although there were no overall
differences between these four conditions in
response rates, electronic surveys yielded
quicker responses (fax: 8.8 days; e-mail: 6.1
days; Web form: 7.4 days) than did mail
(12.9 days). Two studies indicated superior
results with an e-mail survey (Parker, 1992;
Opperman, 1995). Both of these studies,
however, had methodological flaws.
Parker (1992), for example, sent e-mail
surveys to those in a company who had an
e-mail account. Those who did not have an
e-mail account were sent a survey by
"company pouch". The "company pouch"
group yielded a lower response rate (38 per
cent versus 68 per cent), and Parker
attributed this result to the low image that
employees have of documents sent through
the company mail system. However, since the
respondents were not randomly assigned to
the treatments, the differences in response
could also have been due to group differences.
The other study that found superior results
with an e-mail survey was conducted by
Opperman (1995). It is not surprising he got
superior results with his e-mail survey since he
used a follow-up mailing with it while
apparently not using the same procedure with
his standard postal survey. Moreover, his
postal survey was conducted in an earlier time
period and it is not clear whether it covered
the same topic as his e-mail survey.
A number of factors could explain relatively
low response rates of e-mail surveys. First,
most of these are conducted so that the
respondent's identity must be revealed. The
lack of anonymity may deter some people
from responding. Second, an e-mail survey is
very easy to ignore and discard. All the
respondent must do is hit the "delete" button.
Third, since e-mail is a relatively new survey
medium, some potential respondents may be
confused about how to return the survey.
Bachmann et al. (1996) found in a pre-test of
their e-mail survey that many survey
respondents did not understand how to return
their answers. Fourth, since e-mail surveys
cannot use pre-paid incentives or create the
impression that a lot of expense and effort has
gone into the survey, they are less able than
other survey methods to create respondent
dissonance or to make the respondent feel
obligated to respond. Fifth, the flat text and
constrained format of an embedded e-mail
survey may make some e-mail recipients
negatively disposed towards responding (see
Carroll, 1994).
Experience with mail, telephone and
face-to-face interview surveying has shown
that the most powerful determinant of
response rates is the number of attempts
made to contact a respondent (Dillman et al.,
1974; Goyder, 1985, 1987; Heberlein and
Baumgartner, 1978). Evidence from e-survey
research has indicated that multiple contacts
can push up response rates here, too (Mehta
and Sivadas, 1995; Smith, 1997). Smith
(1997) achieved a 5 per cent higher response
rate with e-mail using multiple contacts, while
Mehta and Sivadas (1995) pushed their
response rate in this way by 20 per cent.
Personalising questionnaires is another
feature that can increase response rates in
mail surveys (Dillman, 1978, 1991, 1999).
A personalised letter addressed to a specific
individual shows the respondent that he or
she is important. This technique can be
applied to e-mail as well.
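Carrying the personalisation technique over to e-mail amounts to a simple mail-merge. The template wording and recipient details below are invented for illustration:

```python
# Hedged sketch of a personalised e-mail invitation via mail-merge:
# each message is addressed to a named individual, signalling to the
# respondent that he or she is important. All names and wording are
# hypothetical.

TEMPLATE = ("Dear {name},\n\n"
            "As a valued {role}, your views matter to us. The attached\n"
            "questionnaire should take about ten minutes to complete.\n")

def personalise(recipients):
    """Fill the template once per recipient record."""
    return [TEMPLATE.format(**recipient) for recipient in recipients]

letters = personalise([
    {"name": "Dr Jones", "role": "panel member"},
    {"name": "Ms Smith", "role": "subscriber"},
])
# letters[0] begins "Dear Dr Jones,"
```

The same merge fields could drive the repeated follow-up contacts that the studies above found to be the strongest lever on response rates.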
Quality of responding
Differences have been found in the results
produced by online and offline surveys. While
electronic surveys require respondents to
work through the questionnaire by themselves
as with a standard self-completion paper
instrument, the interactive element of the
electronic survey meant that it also had
something in common with telephone and
face-to-face interviews in which respondents
interact with the source of questions (Kiesler
and Sproull, 1986). These features therefore
gave the electronic survey quite a different
character from offline data collection. The
social context of online responding might
mean that answers are less influenced by
social desirability effects than those given in
telephone and face-to-face interviews. At the
same time, the interactive feature might lead
respondents to be more engaged than they
would be with standard self-completion
questionnaires. This might, in turn, lead
respondents to complete more items, make
fewer mistakes, give longer answers to
open-ended questions, and disclose more
about themselves (Brown et al., 2001).
There is a growing body of evidence that
online surveys produce higher response
quality than some offline methodologies
(e.g. self-completion postal surveys). Online
questioning results in fewer items being
omitted by respondents (Kiesler and Sproull,
1986; Schaefer and Dillman, 1998; Sproull,
1986). Fewer mistakes tend to be found in
online surveys compared with offline
self-completion methods, because
respondents are guided more closely through
the questionnaire (Kiesler and Sproull, 1986).
The biggest difference between online and
offline methods, however, is that electronic
surveys produce richer responses to
open-ended questions (Mehta and Sivadas,
1995; Bachmann et al., 1996; Comley, 1997;
Schaefer and Dillman, 1998). Responses to
open-ended questions in online surveys tend
to be longer and more revealing than those
generated in standard postal, self-completion
surveys (Taylor, 2000).
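Claims about response "richness" are usually operationalised through simple length or completeness measures. A minimal sketch of such a measure, using invented example answers rather than data from the studies cited:

```python
def mean_word_count(responses):
    """Average length, in words, of a set of open-ended answers."""
    counts = [len(answer.split()) for answer in responses]
    return sum(counts) / len(counts)

# Invented answers standing in for postal vs online open-ended responses.
postal = ["Good service.", "No complaints really."]
online = [
    "The service was good overall, although the search facility was slow",
    "No complaints, but I would like clearer navigation and more help pages",
]
print(mean_word_count(postal), mean_word_count(online))
```

Word counts are a crude proxy for richness, but they make cross-mode comparisons of open-ended responding straightforward to quantify.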
Online research may be more effective in
addressing sensitive issues. Respondents seem
to be more willing to reveal information about
their experiences with sensitive conditions
(e.g. anxiety disorders, ovarian cancer,
incontinence, erectile dysfunction) in online
surveys. Moon (1998) found that consumers
revealed a great deal of personal information
when completing a computer-mediated
survey.
In general, computer-assisted
self-administered questionnaires have been
found to reduce the tendency to engage in
socially desirable responding compared with
interviewer-administered surveys, though they
are no better at avoiding it than
self-completed paper questionnaires
(Finnegan and Allen, 1994; Martin and
Nagao, 1989; Rosenfeld et al., 1993).
Online qualitative research
Online technology has provided new
opportunities for qualitative researchers, too.
Some initial efforts have centred on
replicating traditional focus group interviews
with real-time online focus groups. Others
have involved special Internet "chat rooms"
to hold what are, in effect, continuing
conversations with individuals and groups
who could not easily be brought together at
one time or in one place. Yet another
approach has been to conduct focus group-
style interviews via group support systems
(GSS), whereby a small group of respondents
answer questions using sophisticated software
on workstations that are connected by a local
area network (Lewis, 1987).
While there are concerns about the
representativeness of online survey samples,
this is less problematic in relation to
qualitative research where purposive sampling
tends more often to be applied. The aim of
qualitative approaches is to understand how
individuals make sense of the world around
them, but not necessarily to establish whether
such perceptions are normative. Nonetheless,
proponents have argued that geographically
dispersed participants can be convened more
readily through online qualitative research
than with standard focus group methodology
(Savage, 2001).
Making initial contact with informants for
online data collection differs substantially
from the way consumer researchers have
worked with more conventional data
collection techniques. Traditionally, market
research companies have made unsolicited
contacts with prospective research
participants. In cyber-space this is called
"spamming" and sometimes results in
"backlash" or "flaming", a profusion of
caustic replies to the sender (Frost, 1998;
James, 2000). Different methods must be
used to invite consumers to participate in
research.
As already noted, sampling frames are also
problematic in online research because people
change their e-mail providers or e-mail
addresses much more frequently than typical
respondents change their mailing addresses.
Some researchers have found high
undeliverable rates when using e-mail
addresses (Opperman, 1995; Frost, 1998;
Dommeyer and Moriarty, 2000). However,
this may be due to potential respondents
employing anti-spam or junk filters on their
mail applications. During recent work on the
new NHS interactive Website, the present
authors discovered that at least one
respondent who had submitted a health
information query to the new service failed to
receive her response for this reason. Her
e-mail software did not recognise the NHS
response e-mail address and deleted the
message on her behalf (Nicholas et al.,
2001b).
Once the initial contact barriers have been
overcome, online research provides a number
of distinct advantages to researchers. As
noted, answers to open-ended questions may
be much richer in an online context than in an
offline context (Mehta and Sivadas, 1995;
Comley, 1997; Schaefer and Dillman, 1998;
Dommeyer and Moriarty, 2000; Taylor,
2000). Computer-mediation creates an
environment in which respondents appear to
be more willing to disclose personal
information about themselves (Moon, 1998).
This observation in quantitative surveys
clearly has important implications for the
potential of online platforms in conducting
qualitative research where all responding is
open-ended.
So far, online focus groups have received
mixed reviews (Curasi, 2001; Savage, 2001).
The commercial market research community,
for instance, is divided over the use of online
focus groups. Some practitioners feel that
important non-verbal cues are missed online
but would be observed during face-to-face
interviews. Online focus groups are also
criticised because group dynamics, one of the
greatest advantages of focus groups, operate
very differently in a
computer-mediated environment. Some
online focus groups have had disappointing
results, with participants typing in one- or
two-word responses to the questions asked or
to other informant comments (White, 2000).
Other researchers argue that online focus
groups have their advantages, including the
fact that people with online access are a good
population to query for companies selling
products and services over the Internet.
At about one-fifth to one-half the cost of
traditional focus groups, online focus groups
are also sometimes recommended as a means
of pre-testing more traditional focus groups.
Within a GSS framework, direct
comparisons with traditional focus groups
have shown that the online methodology can
serve as a useful alternative to its offline
counterpart (Soutar et al., 1996; Sweeney
et al., 1996). With the GSS approach, a group
interview is run by a facilitator assisted by a
technical "chauffeur". Participants receive
open-ended questions on their workstations
and input their answers in open text form.
These responses are collated centrally by the
chauffeur, whose screen is also visible to
participants through an overhead projection
system. The facilitator also has sight of all this
material and prompts participants with a
series of questions, ensuring as far as possible
that all members of the group contribute. As
with an offline focus group, part of the
facilitator's job is to ensure that the group
discussion stays on track and does not go off
on irrelevant tangents. This procedure has
been found to produce successful focus group
research in which all respondents participate
and where the emergence of a dominant
respondent is unlikely to occur. This online
qualitative methodology can generate a
wealth of information from respondents and
because the "discussion" is conducted in a
written text format, the "results" in their raw
form can be produced immediately at the end
of the session (Sweeney et al., 1997).
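Because GSS responses arrive as typed text keyed to each question, the immediately available "raw results" described here amount to a simple collation step. A sketch of the idea (the data structure and field names are this example's own, not taken from the GSS software described):

```python
from collections import defaultdict

def collate_session(entries):
    """Group (question, participant, text) tuples into a per-question transcript."""
    transcript = defaultdict(list)
    for question, participant, text in entries:
        transcript[question].append(f"{participant}: {text}")
    return dict(transcript)

# Invented session entries for illustration.
session = [
    ("Q1: What do you value most?", "P1", "Price, mainly."),
    ("Q1: What do you value most?", "P2", "Reliability over price."),
    ("Q2: What would you change?", "P1", "Faster delivery."),
]
for question, answers in collate_session(session).items():
    print(question, answers)
```

Nothing needs transcribing: the transcript exists as soon as the session ends, which is the practical advantage claimed for the GSS approach.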
Curasi (2001) compared depth interviews
collected online with depth interviews
conducted face-to-face. The online interview
dataset included some of the strongest and
some of the weakest interviews in the
investigation. Under some conditions, online
depth interviews can provide a useful
complement to the traditional face-to-face
interview. With online focused interviews,
there is no need for transcription of
interviews, because respondents provide their
contributions in written form (Tse, 1998).
Follow-up probes can take place throughout
the entire research process and not just at the
time of the interview itself. Additional
questions that come to mind after the
interview can be e-mailed to respondents.
Respondents who participate online may
also allow for easier member checks, a
technique for testing hypotheses, data,
preliminary categories and interpretations
with informants. This method is believed to
be one of the most crucial techniques for
establishing credibility (Guba and Lincoln,
1989; Wallendorf and Belk, 1989). Online
research designs also allow for greater
triangulation of methods or approaches,
which can yield more robust data.
In conducting online research, researchers
need to distinguish between committed and
less committed informants. Those who are
motivated may yield rich data, while those
who are not will yield very little. The latter
may not be worth chasing up later. Personal
contact, face-to-face or on the telephone,
with potential respondents is better than
e-mail for initial approaches. Then, when the
initial questions are sent, interviewees should
be informed that the more detailed their
answers, the better. Because follow-up
questions lengthen the interview, additional
time should be allotted for
computer-mediated interviews.
Accuracy of online research
One test of the credibility of any new data
collection method hinges on its ability not just
to describe phenomena, but more especially
to predict behaviour reliably and accurately.
Taylor (2000) reported on a test study with
online surveying in the context of the 1998
US gubernatorial and senatorial elections.
The winner was correctly projected in 21 out
of 22 races (95 per cent). In comparison, 49
out of 52 (94 per cent) telephone polls that
were published in Hotline or posted on the
CNN Website for these same races made
correct projections, a statistically
insignificant difference in success rate. Of
course, "per cent correct" is a crude,
potentially misleading indicator of the
accuracy of election forecasts. It is also
important to look at the accuracy of the
election forecasts in terms of votes cast for
candidates.
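Assessing accuracy in terms of votes cast usually means comparing forecast vote shares with actual shares, for instance via mean absolute error. A sketch with invented figures (not the 1998 race results):

```python
def mean_absolute_error(forecast, actual):
    """Average absolute gap, in percentage points, between forecast and result."""
    assert len(forecast) == len(actual)
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(forecast)

# Invented vote shares (percentage points) for three hypothetical races.
forecast = [52.0, 47.5, 61.0]
actual = [50.5, 49.0, 58.0]
print(f"Mean absolute error: {mean_absolute_error(forecast, actual):.1f} points")
```

Unlike "per cent correct", this measure penalises a forecast that picks the winner but badly misjudges the margin.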
In computing the degree of error in
predicting the election results, online surveys
performed comparably with telephone polls.
Taylor (2000) reported further comparisons
between online (Internet) surveys and
telephone surveys in the measurement of
political attitudes and behaviour. With
weighting of data, online surveys and
telephone polls were found to perform equally
well for much of the time. The use of
demographic weighting alone was sufficient to
bring most online variables very close to the
replies in the parallel telephone survey.
Further weighting for voting propensity
added little to the accuracy of these surveys.
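Demographic weighting of the kind Taylor (2000) describes is essentially post-stratification: each respondent is up- or down-weighted so that sample cells match known population shares. A minimal sketch with invented figures:

```python
def demographic_weights(sample_counts, population_shares):
    """Per-group weight so weighted sample shares match population shares."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] * n / sample_counts[g] for g in sample_counts}

# Invented example: an online sample over-represents under-35s
# (70% of respondents) relative to the population (40%).
sample = {"under_35": 70, "35_plus": 30}
population = {"under_35": 0.40, "35_plus": 0.60}
weights = demographic_weights(sample, population)
print(weights)  # under_35 weighted down, 35_plus weighted up
```

Survey estimates are then computed as weighted rather than simple means; the weighted group totals reproduce the population shares exactly.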
Conclusion
Online research has been a growth
phenomenon in academic and commercial
contexts. It appears as a natural progression
given the widespread penetration of
computer-mediated communications
technologies and increasing use of the
Internet and other interactive
communications systems by government,
business and private citizens. There is no
doubt that online research is a convenient way
to collect data from populations that have
adopted these technologies and utilise them
on a regular basis. Electronic surveys can
facilitate the speedy distribution of
questionnaires to large numbers of potential
respondents and the rapid processing of data.
In the online environment, automatic
question routing and pre-determination of
question and response option presentation
formats reduce the demand on interviewers or
respondents and, in turn, preclude many
sources of response error. All this represents a
positive development.
Online research is not the same as offline
research, however. The decision to use online
research must take into account the
availability of respondent samples and the
type of data required. Anyone attempting
direct comparisons between online and offline
surveys in multi-mode research studies should
be aware of, and take into account, the
differences in the nature of responding and
the quality of data each type of research can
produce, even when essentially the same
questions are being asked.
References
Bachmann, D., Elfrink, J. and Vazzana, G. (1996), "Tracking the progress of e-mail versus snail-mail", Marketing Research, Vol. 8 No. 3, pp. 31-5.
Bachmann, D., Elfrink, J. and Vazzana, G. (1999), "E-mail and snail-mail face off in rematch", Marketing Research, Vol. 11 No. 4, pp. 10-15.
Brown, J., Culkin, N. and Fletcher, J. (2001), "Human factors in business-to-business research over the Internet", International Journal of Market Research, Vol. 43 No. 4, pp. 425-40.
Carroll, S. (1994), "Questionnaire design affects response rate", Marketing News, Vol. 28 No. 1, p. 14.
Cobanoglu, C., Warde, B. and Moreo, P.J. (2001), "A comparison of mail, fax and web-based survey methods", International Journal of Market Research, Vol. 43 No. 4, pp. 441-52.
Coleman, S. (2000), "The new media and democratic politics", New Media & Society, Vol. 1 No. 1, pp. 67-74.
Comley, P. (1997), The Use of the Internet as a Data Collection Method, available at: www.sga.co.uk.esomar.html, pp. 1-7.
Couper, P., Blair, J. and Triplett, T. (1997), A Comparison of Mail and E-mail for a Survey of Employees in Federal Statistical Agencies, paper presented at the annual conference of the American Association for Public Opinion Research, Norfolk, VA.
Curasi, C.F. (2001), "A critical exploration of face to face interviewing vs. computer-mediated interviewing", International Journal of Market Research, Vol. 43 No. 4, pp. 361-75.
de Leeuw, E.D. (1992), Data Quality in Mail, Telephone and Face-to-Face Surveys, TT Publications, Amsterdam.
Dillman, D.A. (1978), Mail and Telephone Surveys: The Total Design Method, Wiley-Interscience, New York, NY.
Dillman, D.A. (1991), "The design and administration of mail surveys", Annual Review of Sociology, Vol. 17 No. 2, pp. 225-49.
Dillman, D.A. (1999), Mail and Internet Surveys: The Tailored Design Method, 2nd ed., John Wiley, New York, NY.
Dillman, D.A., Christenson, J.A., Carpenter, E.H. and Brooks, R. (1974), "Increasing mail questionnaire response: a four-state comparison", American Sociological Review, Vol. 39, pp. 744-56.
Dommeyer, C.J. and Moriarty, E. (2000), "Comparing two forms of an e-mail survey: embedded vs. attached", International Journal of Market Research, Vol. 42 No. 1, pp. 39-50.
Dutton, W.H. (Ed.) (1996), Information and Communication Technologies: Visions and Realities, Oxford University Press, Oxford.
Dutton, W.H. (1999), Society on the Line, Oxford University Press, Oxford.
Finnegan, J.E. and Allen, N.J. (1994), "Computerised and written questionnaires: are they equivalent?", Computers in Human Behaviour, Vol. 10 No. 3, pp. 484-96.
Frost, F. (1998), "Electronic surveys: new methods of primary data collection", in Anderson, P. (Ed.), Proceedings of the 27th EMAC Conference, Marketing Research, EMAC, Stockholm.
Goyder, J.C. (1985), "Face to face interviews and mail questionnaire: the net difference in response rate", Public Opinion Quarterly, Vol. 49 No. 2, pp. 234-52.
Goyder, J.C. (1987), The Silent Majority: Nonrespondents on Sample Surveys, Westview Press, Boulder, CO.
Guba, E.G. and Lincoln, Y.S. (1989), Fourth Generation Evaluation, Sage, Thousand Oaks, CA.
Gunter, B., Nicholas, D., Williams, P. and Huntington, P. (2001), "Is TV good for you?", The Library Association Record, Vol. 103 No. 9, pp. 557-8.
Havice, M.J. and Banks, M.J. (1992), "Live and automated telephone surveys: a comparison of human interviewers and an automated technique", Journal of the Market Research Society, Vol. 33 No. 2, pp. 91-102.
Heberlein, T.A. and Baumgartner, R. (1978), "Factors affecting response rates to mailed questionnaires: a quantitative analysis of the published literature", American Sociological Review, Vol. 43 No. 2, pp. 447-62.
James, D. (2000), "The future of online research", Marketing News, 3 January, pp. 164-70.
Kiesler, S. and Sproull, L. (1986), "Response effects in the electronic survey", Public Opinion Quarterly, Vol. 50 No. 3, pp. 402-13.
Lewis, F.L. (1987), "A decision support system for face-to-face groups", Journal of Information Science, Vol. 13 No. 2, pp. 211-19.
Martin, C.L. and Nagao, D.H. (1989), "Some effects of computerized interviewing on job applicant responses", Journal of Applied Psychology, Vol. 74 No. 1, pp. 72-80.
Mehta, R. and Sivadas, E. (1995), "Comparing response rates and response content in mail versus electronic mail surveys", Journal of the Market Research Society, Vol. 37, pp. 429-39.
Moon, Y. (1998), "Impression management in computer-based interviews: the effects of input modality, output modality and distance", Public Opinion Quarterly, Vol. 62 No. 4, pp. 610-22.
Nicholas, D., Huntington, P. and Williams, P. (2001a), "When titans clash: digital health information providers and the health service square up to each other", Managing Information, Vol. 8 No. 3, pp. 50-7.
Nicholas, D., Huntington, P. and Williams, P. (2001b), NHS Direct Online Interactive Enquiry Service: Evaluation of the Pilot Stage, unpublished report submitted to NHS Direct Online operational team, City University, London.
Opperman, M. (1995), "E-mail surveys: potentials and pitfalls", Marketing Research, Vol. 7 No. 1, pp. 29-33.
Palmquist, J. and Stueve, A. (1996), "Stay plugged into new opportunities", Market Research, Vol. 8 No. 1, pp. 13-15.
Parker, L. (1992), "Collecting data the e-mail way", Training and Development, July, pp. 52-4.
Reeves, B. and Nass, C. (1996), The Media Equation, Cambridge University Press, New York, NY.
Rice, R. (1993), "Media appropriateness: using social presence theory to compare traditional and new organisational media", Human Communication Research, Vol. 19 No. 4, pp. 451-84.
Rosenfeld, P., Booth-Kewley, S. and Edwards, J.E. (1993), "Computer-administered surveys in organisational settings", American Behavioural Scientist, Vol. 36 No. 4, pp. 485-511.
Savage, M. (2001), "Online qual debate ensues", Research, December, Issue 427, p. 12.
Schaefer, D.R. and Dillman, D.A. (1998), "Development of a standard e-mail methodology: results of an experiment", Public Opinion Quarterly, Vol. 62 No. 3, pp. 378-97.
Smith, C.B. (1997), "Casting the net: surveying an Internet population", Journal of Computer-Mediated Communication, Vol. 3, pp. 1-12, available at: www.usc.edu/dept/annenberg/vol3/issue1
Soutar, G.N., Whiteley, A.M. and Callan, J.L. (1996), "Group support systems: an alternative to focus groups", Australian Journal of Market Research, Vol. 4 No. 1, pp. 35-46.
Sproull, L.S. (1986), "Using electronic mail for data collection in organisational research", Academy of Management Journal, Vol. 29 No. 2, pp. 159-69.
Sweeney, J.C., Soutar, G.N., Hausknecht, D.R., Dallin, R.F. and Johnson, L.W. (1997), "Collecting information from groups: a comparison of two methods", Journal of the Market Research Society, Vol. 39 No. 2, pp. 397-411.
Sweeney, J.C., Soutar, G.N., Whiteley, A.M. and Johnson, L.W. (1996), "Generating consumption value items: a parallel interviewing process approach", Second South-East Asia-Pacific Association for Consumer Research Conference, 12-13 July, Perth.
Taylor, H. (2000), "Does Internet research work? Comparing online survey results with telephone survey", International Journal of Market Research, Vol. 42 No. 1, pp. 51-63.
Taylor, J., Bellamy, C., Raab, C., Dutton, W.H. and Peltu, M. (1996), "Innovation in public service delivery", in Dutton, W.H. (Ed.), Information and Communication Technologies: Visions and Realities, Oxford University Press, Oxford, pp. 265-82.
Tse, A.C. (1998), "Comparing the response rate, response speed, and response quality of two methods of sending questionnaires: e-mail vs. mail", Journal of the Market Research Society, Vol. 40 No. 4, pp. 353-61.
Tse, A.C., Tse, K.C., Yin, C.H., Ting, C.B., Yi, T.W., Yee, K.P. and Hong, W.C. (1995), "Comparing two methods of sending out questionnaires: e-mail versus mail", Journal of the Market Research Society, Vol. 37 No. 2, pp. 441-6.
Tyebjee, T.T. (1979), "Telephone survey methods: the state of the art", Journal of Marketing, Vol. 43 No. 2, pp. 68-78.
US Department of Commerce (1999), Falling Through the Net: Defining the Digital Divide, National Telecommunications and Information Administration, 27 pp., available at: www.ntia.doc.gov/ntiahome/fttn99
Wallendorf, M. and Belk, R. (1989), Interpretive Consumer Research, Association for Consumer Research, Salt Lake City, UT.
Walther, J.B. (1992), "Interpersonal effects in computer-mediated interaction: a relational perspective", Communication Research, Vol. 19 No. 1, pp. 52-90.
Weible, R. and Wallace, J. (1998), "The impact of the Internet on data collection", Marketing Research, Vol. 10 No. 3, pp. 19-27.
White, E. (2000), "Market research on the Internet has its drawbacks", The Wall Street Journal, 2 March, B3.
Woolgar, S. (2000), "Virtual Society?", Profile 2000, Said Business School, University of Oxford, Oxford.
239
Online versus offline research
Barrie Gunter, David Nicholas, Paul Huntington and Peter Williams
Aslib Proceedings
Volume 54
.
Number 4
.
2002
.
229±239
Downloaded by Northumbria University At 23:36 02 December 2014 (PT)
This article has been cited by:
1. Niall Piercy. 2014. Online service quality: Content and process of analysis. Journal of Marketing Management 30:7-8, 747-785.
[CrossRef]
2. Niall Piercy, Colin Campbell, Daniel Heinrich. 2012. Suboptimal segmentation: Assessing the use of demographics in
financial services advertising. Journal of Financial Services Marketing 16:3-4, 173-182. [CrossRef]
3. Roediger Voss, Thorsten Gruber, Alexander Reppel. 2010. Which classroom service encounters make students happy or
unhappy?. International Journal of Educational Management 24:7, 615-636. [Abstract] [Full Text] [PDF]
4. Rodoula H. Tsiotsou, Vanessa Ratten, Nicola Greaves, Heather Skinner. 2010. The importance of destination image analysis
to UK rural tourism. Marketing Intelligence & Planning 28:4, 486-507. [Abstract] [Full Text] [PDF]
5. Tony Garry, T.C. Melewar, Len Tiu Wright, Thorsten Gruber, Stephan C. Henneberg, Bahar Ashnai, Peter Naudé,
Alexander Reppel. 2010. Complaint resolution management expectations in an asymmetric business‐to‐business context.
Journal of Business & Industrial Marketing 25:5, 360-371. [Abstract] [Full Text] [PDF]
6. Bernd Kupka, André M. Everett, Stephen G. Atkins, Marion Mertesacker, Lynne Walters, Tim Walters, Andrea Graf,
L. Brooks Hill, Carley Dodd, Jürgen Bolten. 2009. The intercultural communication motivation scale: An instrument to
assess motivational training needs of candidates for international assignments. Human Resource Management 48:5, 717-744.
[CrossRef]
7. Stephan C. Henneberg, Thorsten Gruber, Alexander Reppel, Bahar Ashnai, Peter Naudé. 2009. Complaint management
expectations: An online laddering analysis of small versus large firms. Industrial Marketing Management 38:6, 584-598.
[CrossRef]
8. Simon Pinnegar, Robert Freestone. 2007. Identifying Australian sites of urban planning heritage: Results from a national
web‐based survey. Australian Planner 44:4, 36-43. [CrossRef]
9. Kunsoon Park, Mahmood A. Khan. 2006. An Investigation of Factors Influencing Participation in Online Surveys by College
Students. Journal of Hospitality & Tourism Education 18:4, 9-16. [CrossRef]
10. Susan Hart, Gillian Hogg, Madhumita Banerjee. 2004. Does the level of experience have an effect on CRM programs?
Exploratory research findings. Industrial Marketing Management 33:6, 549-560. [CrossRef]
11. Thorsten Gruber, Alexander E. Reppel, Isabelle Szmigin, Rödiger VossDesigning Online Laddering Studies 193-215.
[CrossRef]
Downloaded by Northumbria University At 23:36 02 December 2014 (PT)
... The online and offline data collection modes offer significant research platforms [100]. We found the differences in responses because online research is not the same as offline research [101]. The online responses are poorer quality than offline surveys [102,103] due to the restriction of individuals with access to required technologies, e.g., internet, computers or smartphones. ...
... We neither developed any hypothesis on a comparative basis nor applied both collected samples individually. Thus, these differences in responses (online and offline) did not affect the outcomes of the results [101]. ...
Article
Full-text available
This study investigated Environmental Sustainability (ES) and Environmental Performance (EP) through the direct and indirect use of Organizational Environmental Culture (OEC). This study focused on top managers, namely, the CEOs and directors of SMEs, along with their middle managers. In this study, the researchers employed green HRM and Green Innovation (GI) as mediators. We applied a quantitative approach that utilized cross-sectional data collected from Saudi Arabian Small and Medium-sized Enterprises (SMEs). We used a survey questionnaire with a convenience sampling technique and succeeded in obtaining replies from 236 respondents. By using the Structural Equation Model (SEM), this study’s findings demonstrate that OEC has a positive and significant effect on green HRM and GI. This study’s findings support the development of policies that promote ES and EP through green environmental practices. Further, green HRM and GI are significant predictors of ES and EP. This study’s findings also show that green HRM and GI have a mediating effect in developing the associations between OEC and ES and EP. Ultimately, this study’s findings make a significant contribution to the depth of the empirical evidence about SMEs in the context of Saudi Arabia.
... The Internet allows a researcher wanting to survey hard-to-reach populations by reaching such individuals via groups, forums, and even targeted advertisements (Wright, 2005). NDErs may find an online questionnaire more comfortable as it can retain anonymity and they can answer, or not answer, however they may like (Gunter, Nicholas, Huntington, & Williams, 2002). Furthermore, questionnaires can reduce experimenter bias/effects (Denissen et al., 2010;Gunter et al., 2002). ...
... NDErs may find an online questionnaire more comfortable as it can retain anonymity and they can answer, or not answer, however they may like (Gunter, Nicholas, Huntington, & Williams, 2002). Furthermore, questionnaires can reduce experimenter bias/effects (Denissen et al., 2010;Gunter et al., 2002). ...
Thesis
Full-text available
This thesis aims to identify challenging aftereffects of near-death experiences (NDEs), to explore how these are lived by near-death experiencers (NDErs), and to study the impacts of these challenging aftereffects on psychological wellbeing. This thesis also aims to identify what aids integration of these aftereffects, particularly so that when NDErs come to mental health professionals for help, these professionals have a framework with which to work. Per a review of the literature, there has been research on certain aspects of NDEs in relation to wellbeing, such as satisfaction with life or post-traumatic growth, but not as looking at factors that make up psychological wellbeing as a whole. Furthermore, the literature review identified only two studies which mapped challenging aftereffects with limited information on how the data were analyzed. Thus, a mixed-method study was developed to identify challenging NDE aftereffects and examine further the impact of these on wellbeing. A questionnaire utilizing the NDE Scale, multiple choice questions measuring wellbeing outcomes, and open response questions to further describe how challenges were experienced by participants was employed. The quantitative analysis discovered that the deeper the NDE, particularly if the NDE had a transcendental component, the more someone reports positive long-term changes in mood. It also identified that the more an NDEr reports positive changes in one’s current sense of happiness and life satisfaction, the more one reports ongoing positive changes in their perception of life’s purpose, social relationships, and mood. The analysis also presented the finding that people who had their NDE when they were teenagers or children report more struggles socially than compared to people who had their NDEs as adults. 
The thematic content analysis conducted on the written answers from the questionnaire illuminated the variety of psychological changes following an NDE and categorized them as negative, neutral, and positive depending on how the participants presented them. However, the thematic content analysis also showcased how even if changes are viewed positively, this does not negate the fact they could still be challenging to accommodate. For instance, the majority of participants discussed how discovering their life’s purpose through their NDE was a positive thing but trying to live their life’s purpose was often a struggle, particularly when, for example, they could not easily change jobs without sacrificing financial stability for their family. Interviews to further illuminate challenges experienced by the participants were conducted and analyzed via interpretative phenomenological analysis. The analysis showcased key themes while presenting and respecting the subtle nuances of each interviewee’s personal experience. Each theme had at least two subthemes: Relationship with Reality – “life is temporary; we are forever,” and, “life is an assignment/has purpose;” Relationship with NDE/Its Aftereffects – “community/sharing the experience,” and “time to comprehend the/live with it;” Relationship with Self – “strong sense of responsibility for/of Self,” and, “pursued integration/development;” and lastly, Relationship with Other People – “being compassionate with boundaries,” “family/friend support,” and “loneliness/hard to relate with other people.” These themes/subthemes were then placed within the framework of the Six-Factor Model of psychological wellbeing as a way to gauge how certain aftereffects impact wellbeing. This thesis is the first research to map challenges caused by NDEs using a multi-method approach involving statistical analysis, thematic analysis, content analysis, and interview examined via interpretative phenomenological analysis. 
It is also the first to frame these challenges within a wellbeing model. The findings of this thesis have pragmatic uses, particularly for mental health professionals when working with NDErs. It adds to the clinical as well as the parapsychological, thanatological, and health literature.
... Primary data were collected using online self-administered questionnaires. Gunter et al. (2002) argued that because online questionnaires are answered with no interviewer present, the pressure to give socially desirable responses is reduced. As a result, eliminating the interviewer's presence removed both participant and interviewer bias. ...
Article
Full-text available
Food quality (FQ) has been recognised as one of the fundamental factors that affects customer satisfaction in restaurant operations, yet research on buffet restaurants is still underdeveloped. This study focused on diners’ perceptions of buffet restaurants in Vietnam to (1) determine the most important dimensions of FQ, (2) identify the relationship between FQ dimensions and customer satisfaction, and (3) identify the relationship between overall FQ and customer satisfaction. A total of 143 valid responses to a self-administered online survey were obtained. This study found that, according to these respondents, food freshness is the most important variable, followed by food taste, food presentation, menu variety, food temperature, and healthy food options. The results of regression and correlation analysis reveal that among the six dimensions of FQ measured, food temperature is the only attribute significantly related to customer satisfaction. Nevertheless, overall FQ significantly contributed to customer satisfaction. This implies that restaurateurs operating this style of restaurant should pay attention not only to overall FQ but especially to food temperature, which is often overlooked in other types of restaurants.
... Although online surveys are becoming increasingly popular because of their cost-effectiveness, I decided to conduct my questionnaire offline (Gunter et al., 2002). I sent my questionnaire to the respondents with the help of acquaintances and also asked them to collect the completed questionnaires later and send them back to me. ...
Thesis
Full-text available
With my questionnaire study, I analyzed the effects of the coronavirus epidemic on people’s lives in four European countries, primarily on their income trends. I was able to identify certain factors influencing income and their national and demographic differences. I have shown how my work fits into a segment of the literature that has been rarely analyzed so far. I hope that the results obtained and the new information gained from them can help not only to deal with the current crisis, but also to resolve similar future situations.
... We felt compelled to use the online survey method, because the literature clearly holds that offline surveys achieve higher response rates and introduce less distortion (Dillman & Smyth, 2014; Gunter et al., 2002). Beyond our research questions and hypotheses, our questionnaire also examined other questions, primarily so that data would be available for a later study as well, but we do not address those questions in this paper. ...
Article
The aim of our study is to examine the impact of the restrictions imposed due to the COVID-19 pandemic. Our target group consists of students in Hungarian higher education, whose everyday lives were radically changed overnight by the restrictive measures introduced to contain the epidemic. The hastily introduced online education forced students to stay at home and keep physical distance from their personal relationships. Our research focuses on the psychological effects of this situation, which we investigated using an online questionnaire. After statistical processing of the data, we analysed the results obtained and drew our conclusions.
... One of the most striking features of online questionnaire services is their ability to recruit very large samples at relatively modest expense compared to traditional survey research methods (Brewer & Hunter, 2006). It has been observed that online surveys are generally returned quicker, elicit 'richer' responses to open-ended questions, and have a potentially global reach (Gunter et al., 2002). Furthermore, it is easy to send respondents a reminder e-mail asking them to complete the questionnaire, an action that Moss and Hendry argue increases response rates (Moss & Hendry, 2002, p. 586). ...
Article
Full-text available
This study identifies methods for improving the recruitment of Muslim American women in mental health research. Studying this minority population in more depth could help reduce their suffering from mental illness. A 40-item survey, along with a cover letter, was hosted on the Stanford University website and sent via email to organizations known to have large Muslim American women populations. Although only approximately 200–300 responses were hoped for, an unexpected total of 1279 women completed the survey within days. The effectiveness of this survey was attributed to multiple factors: the ease of an online survey, the privacy afforded by an anonymous survey, trust in the PI, the survey being hosted by a reputable university, and respondents' understanding of the importance of mental health research. It is important to continue improving methods to recruit this minority population of Muslim American women for studies.
... In contrast, an internet-based survey is restricted to participants with internet access, tending to lead to selection bias and making it challenging to achieve representativeness of the investigated sample (external validity). 16 The COVID-19 pandemic does not affect everyone equally, especially in countries with high levels of social inequality. 17 Understanding how sociodemographic profile has affected behaviors is relevant to public health authorities, which can propose strategic actions to mitigate other adverse effects of the pandemic. ...
Article
Full-text available
The pandemic of the new coronavirus (COVID-19) may be affecting the physical activity (PA) level of much of the population. This study aimed to investigate the prevalence of physical inactivity and sedentary behavior (SB) among adults with chronic diseases and their associations with sociodemographic factors during the COVID-19 pandemic. This cross-sectional study included 249 participants (age: 18–91 years; 61.4% female) with chronic conditions who attended the Family Health Strategy program in a small town in Brazil. Data were collected between 2020-07-13 and 2020-07-24 by face-to-face interviews. Self-reported PA, sitting time, chronic diseases, medication use, sociodemographic data, and self-isolation adherence were obtained by questionnaire. During this specific time point of the COVID-19 pandemic, 71.5% of participants did not meet the PA recommendations (≥500 METs-min/week), and the prevalence of SB risk (≥4 hours sitting) was 62.7%. Adjusted logistic regression indicated that male participants (odds ratio [OR]: 1.89 [95% CI 1.02–3.53]), those living alone (OR: 2.92 [95% CI 1.03–8.30]) or in a two-person household (OR: 2.32 [95% CI 1.16–4.63]), and those who reported sometimes performing self-isolation (OR: 3.07 [95% CI 1.47–6.40]) were more likely to meet the minimum PA recommendations. Current smokers had lower odds (OR: 0.36 [95% CI 0.14–0.95]) of meeting the PA recommendations. Older participants (OR: 2.18 [95% CI 1.06–4.50]) and those with multimorbidity (OR: 1.92 [95% CI 1.07–3.44]) were more likely to have a higher degree of SB. There is an urgent need to mitigate physical inactivity and SB, and public health interventions must take sociodemographic status into account.
Article
Full-text available
Background: The popularity of online learning has increased tremendously in response to the needs of students amid outbreaks of emerging infectious diseases. Few studies have concentrated on learners' perspectives on the transition from traditional to online learning. The aim of this study was to assess students' attitudes towards online learning as well as their perceived preparedness and barriers. Methods: A descriptive, cross-sectional, correlational web-based survey design was used to recruit eligible participants from five Jordanian government universities. A Facebook-based campaign and a snowball sampling approach were used to recruit potential survey participants. Results: A total of 1,210 medical college students decided to take part in this online survey. Students' attitudes and perceived preparedness for online learning were moderate, while perceived barriers were high. This study revealed a connection between students' attitudes toward online learning and their gender, major, living area, college level, and prior experience. The main obstacles to online learning were an unstable Internet connection, a lack of motivation, and a lack of instructions. Conclusion: The majority of students had mixed feelings about online learning and were largely supportive of conventional classroom learning. Students were pessimistic about their chances of learning professional skills and core competencies online. More research is required to determine whether students are ready and able to make greater use of online education in order to access high-quality learning opportunities.
Article
Full-text available
The importance of the role of information in the supply chain, and the position of transportation and logistics as economic drivers of countries, has led to the integration of information and communication technology with transportation and the formation of Intelligent Transportation Systems (ITS). The effectiveness of freight and passenger transportation systems depends on the optimization and correlation of ITS with other systems, including business intelligence systems. The main purpose of this study is to develop a conceptual model to investigate the effect of business intelligence factors on the maturity of this system, which ultimately leads to the effective use of intelligent transportation systems. In order to test the hypotheses, a questionnaire based on the literature on the subject was designed and distributed among managers and experts of eight government organizations in the country's transportation sector. The results of analyzing the data collected from the sample, based on the Structural Equation Modeling approach, showed that the studied organizations have a suitable level of maturity to implement a business intelligence system. Also, the quality of information content and access to it, flexibility, and alignment and integration with other systems positively and significantly affect the success of business intelligence. These results also emphasize the importance of paying attention to technological and organizational business intelligence capabilities.
Article
Full-text available
Since the early 1990s, the internet has dominated the attention of the media, academics and business organisations. It has the potential to be a revolutionary way to collect primary and secondary data, although much more research is needed to learn how to better harness its strengths. This project compares depth interviews collected online with depth interviews conducted face-to-face. Advantages and disadvantages are highlighted, as well as suggested strategies for successfully collecting online data. Major points are illustrated using data from a project in which both data collection techniques were employed. The online interview dataset included some of the strongest and some of the weakest interviews in the investigation. This paper argues that under some conditions online depth interviews can provide a useful complement to the traditional face-to-face interview. Sampling frame problems of nonrepresentativeness, endemic in quantitative online data collection, are not problematic if the researcher is conducting an interpretive investigation. When the researcher's goal is not to quantify or generalise but instead to better understand a particular population, online data collection can complement other datasets, allow data triangulation and strengthen the trustworthiness of the findings.
Article
Full-text available
This paper explores the problems and challenges surrounding the conduct of research via the internet among business audiences and highlights the great potential that exists for business-to-business research over the net. It also identifies some of the main obstacles to researching in this way, and examines the factors which cause them.
Article
We examined the effects of computerized interviewing on applicant responses within the context of a laboratory simulation in which subjects were interviewed for either a low- or high-status position (clerk or management trainee) under one of four interview conditions: computerized, paper-and-pencil, or face-to-face with a warm or a cold behaving interviewer. The results indicated that subjects in nonsocial (computer or paper-and-pencil) interview conditions both scored lower on the Marlowe-Crowne measure of socially desirable responding (SDR) and reported their grade point averages and scholastic aptitude scores more accurately (with less inflation) than those in the face-to-face interview conditions. However, the use of nonsocial screening interviews for the high-status position engendered significantly higher levels of applicant resentment about the interview, relative to the conditions in which the interview procedure was appropriate (or more than appropriate) for the position level. This unintended behavioral consequence suggests one of the bounds that may influence the effectiveness of computerized interviewing. Contrary to expectations, we did not find the interpersonal style of the interviewer to significantly affect applicant resentment or SDR.