RESEARCH ISSUES
SHAWNEE VICKERY, Feature Editor, Eli Broad Graduate School of Management, Michigan State University
Electronic Surveys: Advantages and Disadvantages Over Traditional Print Surveys
Kenneth K. Boyer, Michigan State University, John R. Olson, DePaul University, and Eric C. Jackson, Michigan State University
The world is in the midst of an electronic communications revolution. While many people have adopted electronic communication extensively, others are very slow to change. As a result, the potential for a discontinuity has developed between people who communicate via traditional methods and those who use electronic media. The rapid evolution of computer hardware and software has provided a catalyst for businesses to redesign their products and processes. In a similar manner, this technological revolution has given researchers the ability to design surveys and collect data in new ways. Computer-based and Web-based surveys have been developed that make the electronic collection of data easier than ever. However, numerous uncertainties remain regarding issues such as the willingness of respondents to fill out a computerized or Web-based survey, the relative accuracy and reliability of responses, and the best methods of applying these data collection techniques.
As part of a study examining various aspects of online retailing of office supply products, we designed a mini-experiment to compare two data collection methods: a four-page printed survey sent out by regular mail and a computerized version of the same survey sent to participants on a computer disk. This article examines the potential advantages and disadvantages of administering a survey via computer and reports our experiences and findings.
Advantages of Computer Surveys
There are numerous advantages to a computer-administered survey, many of which stem from the greater ability to present or record information. Questions can be written with more complete descriptions because a computer survey is not space-constrained the way a printed one is. For example, a question that refers to a specific technology or business practice assumes that the respondent is familiar with that technology or practice. This assumption is clearly not always true, and a computerized survey can be set up to provide a definition either automatically or as a pop-up feature, in much the same way as a link on a Website. The recent problems with the 2000 presidential election in Florida illustrate that even a simple survey or ballot can run into difficulties. Electronic surveys offer new methods of controlling for dimpled chads, pregnant chads, and hanging chads, but only if they are carefully designed to emphasize ease of use for all users, as the paper Florida ballots were not! Experts have lobbied for electronic surveys as a way of simplifying and securing elections for years (Cranor & Cytron, 1997). Perhaps now they will find a more receptive audience.
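To make the definition-on-demand idea concrete, here is a minimal sketch of a survey question that supplies a definition only when the respondent asks for one. This is purely an illustration, not the software we used; the question, term, definition, and 1-5 scale are hypothetical.

```python
# Minimal sketch: a survey question that offers a definition on demand,
# much like a pop-up link on a Website. All content below is illustrative.

DEFINITION = {
    "EDI": "Electronic Data Interchange: the computer-to-computer exchange "
           "of standard business documents between trading partners."
}

def ask_with_definition(question: str, term: str) -> int:
    """Ask a 1-5 scale question; typing '?' shows a definition of `term`."""
    while True:
        answer = input(f"{question} (1-5, or ? for a definition of {term}): ").strip()
        if answer == "?":
            print(DEFINITION[term])
            continue
        if answer in {"1", "2", "3", "4", "5"}:
            return int(answer)
        print("Please enter a number from 1 to 5.")

if __name__ == "__main__":
    score = ask_with_definition("To what extent does your firm use EDI?", "EDI")
    print("Recorded response:", score)
```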
Another potential advantage of electronic surveys is the ability to include pictures, special formatting, and audio or video links along with straight text. Researchers can use these features to emphasize or draw attention to critical aspects of a question, or to ask a new type of question. It can be very helpful to include multimedia features for a couple of reasons. First, multiple formats can help clarify the questions being asked: a picture or video clip can add substance to a written description. In a similar fashion, the multimedia capability of electronic surveys allows researchers to ask for specific responses to audio or visual questions.
Ken Boyer is an associate professor in marketing and supply chain management at Michigan State University. He earned a B.S. in mechanical engineering from Brown University, and an M.A. and Ph.D. in business administration from Ohio State University. Dr. Boyer's research interests focus on the strategic management of operations, electronic commerce, and the effective use of advanced manufacturing technologies. He has published articles in Management Science, Decision Sciences, Journal of Operations Management, and Business Horizons, among others. His research has won the 1997 Chan Hahn award and the 1996 Stan Hardy award. He is a member of the Academy of Management, the Decision Sciences Institute, and the Society of Manufacturing Engineers.
boyerk@bus.msu.edu

John Olson is an assistant professor in the Department of Management at DePaul University. Dr. Olson earned a B.S. in mathematics and economics from the University of Minnesota, an M.B.A. from St. Cloud State University, and a Ph.D. from the University of Nebraska. Dr. Olson's research interests focus on the effective management of information in the supply chain, including e-commerce applications, EDI systems, and Just-in-Time systems. He has published articles in Interfaces and the Journal of Manufacturing Systems and has co-authored a book on Just-in-Time management. He is a member of the Academy of Management, the Decision Sciences Institute, and INFORMS.

Eric Jackson is in his third year as a doctoral candidate in operations research at Michigan State University. His principal interests are in the applications of complex systems to quality concerns in business. He has degrees in chemistry and chemical engineering from the University of Michigan and worked as the technical director for a specialty chemical company before entering the Ph.D. program at Michigan State.
Second, and perhaps more importantly, the fundamental problem with collecting data of any kind with a survey is the challenge of capturing the attention and time of respondents. Because we live in a dynamic world where much of the population is enthralled with electronic gizmos, whether a Palm Pilot, cell phone, Game Boy, Nintendo, laptop computer, or any of thousands of other business or play devices, electronic surveys offer an opportunity to capture attention in creative ways. Our belief when starting this data collection project was that a properly designed computer survey might have a hard time capturing the respondent's attention on first sight, and hence yield a lower response rate. But we believed that, once started, the computer survey might collect more reliable and valid data if we could provide the "fun feel" of a game or some other electronic assistant. As with many of the other electronic devices listed (think of email, for one), we believe that people use these devices because they seem painless and productive, even though they may not always help us communicate or produce more efficiently. In short, a computer survey with the same number of questions as a printed survey may give the perception of taking less time to complete. This is particularly true because a respondent can quickly size up the length of a printed survey but is unable to judge the length of a computer survey. We therefore took precautions to communicate the expected length of time for completion (15-20 minutes) and to include a notification halfway through the survey apprising respondents of their progress and the remaining questions.
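As an illustration of this kind of progress notice, the sketch below states the expected completion time up front and tells the respondent how many questions remain at the halfway point. It is not our actual survey program; the question list and wording are placeholders.

```python
# Minimal sketch: announce expected length, then show progress halfway through.
# Questions and wording are hypothetical.

QUESTIONS = [f"Question {i + 1} text goes here" for i in range(20)]
EXPECTED_MINUTES = "15-20"

def run_survey(questions):
    print(f"This survey should take about {EXPECTED_MINUTES} minutes to complete.")
    answers = []
    halfway = len(questions) // 2
    for i, q in enumerate(questions, start=1):
        answers.append(input(f"{i}. {q}: "))
        if i == halfway:
            remaining = len(questions) - i
            print(f"-- You are halfway done; {remaining} questions remain. --")
    return answers

if __name__ == "__main__":
    run_survey(QUESTIONS)
```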
Disadvantages and Drawbacks
Unfortunately, nothing in life is free; there are also some significant drawbacks to electronic surveys. Probably the biggest downside is that people are often not completely comfortable with computer technologies. Even people who are fairly computer savvy are not always willing to spend time learning or figuring out a new application that they will not use again. It is therefore very important to target and control for this factor as much as possible. The other primary concern involves data quality. Are responses to electronic surveys identical, similar, or different from those to traditional print surveys? In what ways do they differ? Arguments can be made on either side. In some ways the data ought to be of better quality because of the ability to present information in a more interactive, dynamic framework. But other problems may lead to poorer quality; for example, a respondent's comfort level with computers may be highly associated with data quality (along with the survey's layout and presentation). A paper survey offers a quick, obvious look at its contents, whereas the contents of a computer survey are more hidden. Another argument against electronic surveys is that the data may be biased because of the nature of the data collection. For example, if you ask a question about comfort level with technology, the results may be inflated by self-selection: respondents who are "comfortable" using a computer to answer the survey are also likely to be more comfortable with technology in general.
At least three other issues crop up with respect to electronic surveys. First, there is a potentially higher risk of lost data: e-mails can get lost in the ether or, in our case, the physical computer disks can be damaged in transit. (Interestingly, three or four respondents sent the computer disk back with no responses. We are not sure whether this was an accident or whether the person was trying to send the disk back without completing it in order to get the $15 rebate we promised. That trick is harder to pull off with a paper survey!) Second, we experienced difficulties preparing the computer disks for mailing. The software we used to administer the survey needed to be written onto every disk, at an average of about 1.5 minutes per disk. This becomes a problem when sending out 400 computer disks. We had a research assistant helping with this, but the roughly 10 hours spent loading and unloading disks in a disk drive were obviously not the most exciting! A final concern is computer viruses: many respondents are leery of transmissions or disks from people they do not know, and researchers must likewise be cognizant of the risk of receiving a virus in return. In short, there are many potential problems with electronic surveys, yet many of them can be limited by careful study design.
The Literature on Electronic Data Collection
There is a fairly extensive literature on electronic data collection techniques, but most of it examines only a single issue or design factor at a time. Space does not permit a long review of that literature, so we will summarize the open questions we have found (for a more complete literature review, please contact one of the authors). Some of the questions that have arisen from prior studies utilizing electronic surveys include: (1) Do electronic surveys and paper surveys elicit comparable response rates? (2) Do people who respond to surveys using different media types respond differently to questions? (3) How can electronic collection methods be designed to enhance their usefulness as information collection tools? Our goal is to provide researchers with some personal insights from our findings, both positive and negative, rather than to present a statistical study of relative reliability, validity, and quality. Thus, the following section presents an overview of our experiences.
What We Found
Our contact sample consisted of approximately 1,000 customers who had purchased products over the Internet from a leading retailer of office supplies. The full survey was conducted using two methods. First, approximately 60% of the sample was contacted with a traditional printed survey (four pages in length), accompanied by a cover letter explaining that we would provide participants with a survey summary and a $15 rebate bonus. The cover letter also explained that we would keep all results anonymous and report only aggregate findings. The office supply company we worked with also provided a letter stating its interest in the study and explaining that the authors were acting as independent, non-biased, third-party researchers. The second data collection method involved a computer survey program named Sensus. This program (available from Sawtooth Technologies at http://www.sawtooth.com/) allows a written survey instrument to be coded onto a floppy disk using fairly simple programming rules. The respondent is asked to load the disk into his or her computer, go to the Start menu, and type "a:run" (we prepared labels for the disks bearing the respondent's name and these instructions). From this point the program runs by itself; the user clicks through the questions sequentially, then places the finished disk in a business-reply envelope and mails it back.
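We used the commercial Sensus package, so the sketch below is not the program we shipped. It is only a rough illustration, with hypothetical questions and an assumed output file name, of the basic mechanics of a self-running disk survey: ask the questions, record start and end times, and write the answers to a file that travels back with the disk.

```python
# Illustrative sketch only (not the Sensus software): a self-contained
# survey that records answers plus start/end timestamps to a file.

import json
from datetime import datetime

QUESTIONS = [
    "How often do you purchase office supplies online?",
    "How satisfied are you with delivery times?",
]

def run_disk_survey(out_path: str = "responses.json") -> None:
    started = datetime.now()
    answers = [input(f"{q} ") for q in QUESTIONS]
    finished = datetime.now()
    record = {
        "started": started.isoformat(),
        "finished": finished.isoformat(),
        "elapsed_minutes": round((finished - started).total_seconds() / 60, 2),
        "answers": answers,
    }
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)

if __name__ == "__main__":
    run_disk_survey()
```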
Our research design allows a controlled comparison of two different data collection methods. It also differs from a similar study by Goldsby, Savitskie, Stank, and Vickery (2001) in that we mailed out physical computer disks rather than using the electronic mail approach they took. Both approaches have pluses and minuses. Since there has been little application of electronic surveys in operations management studies, our goal here is to present our preliminary findings in order to provide some insights for other operations management scholars considering the use of electronic surveys.
We sent out a total of 1,045 surveys in our first round of mailings. Several steps were taken to increase the response rate, including a business-reply envelope, an incentive to complete the survey, and several follow-up letters. The first reminder letter was mailed two weeks later, re-emphasizing the confidential nature and importance of the survey. A second follow-up letter and a second copy of the survey were mailed after six weeks to companies that had not returned the original. Very few (less than 5%) of the mailings came back because of incorrect addresses or because the contact person had left the company. This high accuracy rate reflects the currency of the database we received: most of the contacts on our list had done business with the office supplies retailer within the previous six months.
Finally, we initially selected 2,000 names because the office supplies retailer decided to pre-email these customers and ask whether it was acceptable for us to contact them; those who responded "no" were deleted from the list. This was done to uphold the retailer's privacy policy that customer information not be sold or given away. Less than 5% of the customers pre-contacted by email indicated that they did not want to be included in the study.
The final tally consisted of 416 usable responses out of 1,045 total surveys, representing a 39.8% response rate. The response rates for the printed version (261/631 = 41.4%) and the computer version (155/414 = 37.4%) were quite similar. The overall response rate is higher than that seen in similar operations management studies (Kathuria, 2000). Our evidence suggests that it is possible to obtain response rates for electronic surveys that are comparable to those for printed surveys. Based on our experience, we believe several actions can be taken to improve response rates. These include:
• Carefully target the sample (in this case, an Internet purchasing study was sent to Internet customers of a major retailer).
• Carefully explain the purpose and intended use of the survey data, and include clear letters of endorsement from the sponsoring company.
• Follow up, follow up, and follow up some more. Wear them down and be persistent!
• Provide a clear incentive for participating in the survey.
• Make the survey easy to complete.
• Send something tangible. We sent the computerized survey on an actual disk rather than as an attachment or a link to a Website. Our feeling is that people are less likely to throw out something tangible; in contrast, an email is easy to delete.
There is nothing new or particularly creative about the first five points above. In fact, these principles apply equally well to any type of survey instrument. However, we believe that with electronic surveys it is very important to increase the tangibility. People tend to respond differently to something they can touch. Our goal was to put a disk in their hands that they might feel guilty about throwing out. Once they opened the appropriate file, the disk would also make it easier for them to fill out the survey. It completely surprised us that this worked as well as it did!
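The two response rates reported above (261/631 for the printed survey and 155/414 for the computer survey) can also be compared formally. The article does not report such a test; the sketch below is simply one way a reader could check whether the difference between the two rates is statistically meaningful, using a standard two-proportion z-test.

```python
# Two-proportion z-test on the reported response rates (sketch only).

from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, z, p_value

if __name__ == "__main__":
    p1, p2, z, p = two_proportion_z(261, 631, 155, 414)
    print(f"printed: {p1:.1%}, computer: {p2:.1%}, z = {z:.2f}, p = {p:.2f}")
```

With the reported counts this gives z of roughly 1.3 and a two-sided p-value of roughly 0.2, consistent with the conclusion that the two methods produced comparable response rates.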
One interesting outcome was that five respondents could not open the computer version of the survey, either because of a hardware difficulty or because they had an Apple computer and our survey ran only on IBM-compatible machines. This represents less than 5% of our sample, so the problem does not appear to be huge, but it is worth watching. In these five cases we had the respondents fill out a print version of the survey. Of course, we do not know whether there were many more potential respondents to the computer survey who did not fill it out because of similar difficulties. This is certainly something researchers need to consider carefully and do their best to minimize. An important guideline is to develop the instrument with the lowest common denominator in mind; to put it another way, many respondents have computer equipment that is old and outdated and will not be able to handle programs developed with newer software.
A third interesting finding from our study involves the relationship between the time taken to complete the survey and survey quality. The computer survey recorded the start and end time for each respondent, allowing us to compute the length of time spent completing the survey. As shown in Figure 1, the average response time was 17 minutes, 53 seconds, but there is wide variation in individual times. One question we are currently examining is whether the time taken to complete the survey correlates with the "quality" of the data. For example, it is possible that respondents who filled the survey out quickly (four took less than six minutes) did so by simply putting their fingers on autopilot rather than really reading and thinking through the questions. That is very likely the case for the one respondent who completed the entire questionnaire in 1.5 minutes (unless this person was a speed-reader!). We are in the process of conducting an in-depth assessment of the respondents at the two ends of the spectrum (very quick and very slow responses). Clearly, the ability to track the time spent filling out the survey offers real value to researchers who seek to balance and optimize both the quantity and the quality of respondents.
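As a simple illustration of this screening step, the sketch below computes each respondent's completion time from recorded start and end times and flags unusually fast responses for closer review. The timestamps and the six-minute cutoff are invented for illustration.

```python
# Sketch: flag suspiciously fast completions from recorded timestamps.

from datetime import datetime

FAST_CUTOFF_MIN = 6.0   # assumed cutoff: flag anyone finishing in under six minutes

responses = [
    {"id": 1, "start": "2001-03-01T10:00:00", "end": "2001-03-01T10:01:30"},
    {"id": 2, "start": "2001-03-01T11:00:00", "end": "2001-03-01T11:18:10"},
]

for r in responses:
    start = datetime.fromisoformat(r["start"])
    end = datetime.fromisoformat(r["end"])
    minutes = (end - start).total_seconds() / 60
    flag = " <-- review for straight-lining" if minutes < FAST_CUTOFF_MIN else ""
    print(f"respondent {r['id']}: {minutes:.1f} minutes{flag}")
```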
Another interesting outcome from the computer version of the survey is the range of written responses to the more open-ended questions. Obviously, a printed survey can also include open-ended questions, but there are two basic problems: (1) the respondent has to write out his or her response, which can be tedious and hard to read; and (2) the researcher then needs to enter the response into the database. Electronic surveys address both of these problems. We found that while many respondents did not answer the open-ended questions, many did, often voluminously. This less-structured feedback often provides interesting insights, as well as some amusing ones. The comments we received regarding the best and worst features of the office supply Website we were studying ranged from "I wish we could do ALL shopping on-line, it is fast and convenient" to "It BITES." Obviously, there are some disparate opinions!
Summary
Overall, our initial venture into collecting information via an electronic survey was fairly successful. However, numerous questions remain regarding future applications of this methodology. While we achieved comparable response rates and aggregate measures of data reliability and validity (means, standard deviations, and Cronbach's alphas were similar for the two data collection methods), we found the use of electronic surveys to be more time- and effort-intensive. We are currently performing more sophisticated data analysis to compare the relative data "quality" of the computer and print surveys.
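For readers who want to reproduce this kind of reliability check, the sketch below computes Cronbach's alpha for a multi-item scale and compares it across the two collection methods. The item responses shown are invented for illustration; the actual analysis would of course use the real item-level data.

```python
# Sketch: compute Cronbach's alpha per collection method (invented data).

def cronbach_alpha(items):
    """items: list of per-item response lists, all the same length."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(col) for col in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# hypothetical 3-item scale, five respondents per collection method
print_items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 2, 4, 3]]
computer_items = [[3, 5, 4, 4, 2], [3, 4, 4, 5, 1], [4, 5, 3, 4, 2]]

print("alpha (print):   ", round(cronbach_alpha(print_items), 2))
print("alpha (computer):", round(cronbach_alpha(computer_items), 2))
```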
[Figure 1. Distribution of response times for the computer-based survey. Horizontal axis: completion time in minutes, binned from 1-3.99 up to 45 and over; vertical axis: number of surveys (0-20). Average response time = 17:53.]
Essentially, the decision of whether to use electronic surveys in the future boils down to a relative weighting of (1) data quality and (2) data-gathering cost. We believe that our implementation of this technology helped improve data quality, but at a slightly increased data-gathering cost. There is a great deal of potential to improve both dimensions, but there is also a need for much more study to determine the efficacy of electronic surveys as well as the best methodology for utilizing this tool. ■
References
Cranor, L. F., & Cytron, R. K. (1997). Sensus: A security-conscious electronic polling system for the Internet. Proceedings of the Hawaii International Conference on System Sciences, Wailea, Hawaii.

Goldsby, T. J., Savitskie, K., Stank, T. P., & Vickery, S. K. (2001). Web-based surveys: Reaching potential respondents on-line. Decision Line, 32(2), 4-6.

Kathuria, R. (2000). Competitive priorities and managerial performance: A taxonomy of small manufacturers. Journal of Operations Management, 18(6), 627-642.