RESEARCH ISSUES
SHAWNEE VICKERY, Feature Editor, Eli Broad Graduate School of Management, Michigan State University
Electronic Surveys: Advantages and Disadvantages Over Traditional Print Surveys

Kenneth K. Boyer, Michigan State University; John R. Olson, DePaul University; and Eric C. Jackson, Michigan State University
The world is in the midst of an electronic communications revolution. While many people have adopted electronic communication extensively, others are very slow to change. As a result, a gap has developed between people who communicate via traditional methods and those who use electronic media. The rapid evolution of computer hardware and software has provided a catalyst for businesses to redesign their products and processes. In a similar manner, this technological revolution has given researchers the ability to design surveys and collect data in new ways. Computer-based and Web-based surveys now make the electronic collection of data easier than ever. However, numerous uncertainties remain regarding issues such as the willingness of respondents to fill out a computerized or Web-based survey, the relative accuracy and reliability of responses, and the best methods of applying these data collection techniques.
As part of a study designed to examine various aspects of online retailing of office supply products, we designed a mini-experiment to compare two data collection methods: a four-page printed survey sent out by regular mail and a computerized version of the same survey sent to participants on a computer disk. This article examines the potential advantages and disadvantages of administering a survey via computer and reports our experiences and findings.
Advantages of Computer Surveys
There are numerous advantages to a computer-administered survey, many of which stem from its greater ability to present or record information. Questions can be written with more complete descriptions because a computer survey is not space-constrained the way a printed one is. For example, a question that refers to a specific technology or business practice assumes that the respondent is familiar with that technology or practice. This assumption is clearly not always true, and a computerized survey can be set up either to provide a definition automatically or to offer one as a pop-up feature, much like a link on a Website. The recent problems with the 2000 presidential election in Florida illustrate that even a simple survey or ballot can run into difficulties. Electronic surveys offer new methods of controlling for dimpled, pregnant, and hanging chads, but only if they are carefully designed to emphasize ease of use for all users, as the paper Florida ballots were not! Experts have lobbied for electronic surveys as a way of simplifying and securing elections for years (Cranor & Cytron, 1997). Perhaps now they will find a more receptive audience.
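To make the pop-up idea concrete, here is a minimal sketch of how a definition pop-up might be wired into a standalone survey program. The toolkit (Python's tkinter), the question, and the definition text are our own illustrative assumptions, not the actual software used in this study:

```python
import tkinter as tk

class DefinitionPopup:
    """Attach a pop-up definition to a survey question, shown on hover."""
    def __init__(self, widget, definition):
        self.widget = widget
        self.definition = definition
        self.tip = None
        widget.bind("<Enter>", self.show)
        widget.bind("<Leave>", self.hide)

    def show(self, event=None):
        # Place a borderless window just below the question text
        x = self.widget.winfo_rootx() + 20
        y = self.widget.winfo_rooty() + 20
        self.tip = tk.Toplevel(self.widget)
        self.tip.wm_overrideredirect(True)
        self.tip.wm_geometry(f"+{x}+{y}")
        tk.Label(self.tip, text=self.definition, background="lightyellow",
                 relief="solid", borderwidth=1, wraplength=300).pack()

    def hide(self, event=None):
        if self.tip is not None:
            self.tip.destroy()
            self.tip = None

root = tk.Tk()
question = tk.Label(root, text="Q1: Does your firm use EDI?")  # hypothetical item
question.pack(padx=10, pady=10)
DefinitionPopup(question, "EDI (Electronic Data Interchange): the computer-to-"
                          "computer exchange of business documents between firms.")
root.mainloop()
```

Hovering over the question shows the definition only to respondents who want it, which is the advantage over printing every definition on a space-constrained page.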
Another potential advantage of electronic surveys is the ability to include pictures, special formatting, and audio or video links along with straight text. Researchers can use these features to emphasize or draw attention to critical aspects of a question, or to ask a new type of question. Multimedia features can be helpful for a couple of reasons. First, multiple formats can help clarify the questions being asked: a picture or video clip can add substance to a written description. In a similar fashion, the multimedia capability of electronic surveys allows researchers to ask for specific responses to audio or visual questions.
Ken Boyer is an associate professor in marketing and supply chain management at Michigan State University. He earned a B.S. in mechanical engineering from Brown University, and an M.A. and Ph.D. in business administration from Ohio State University. Dr. Boyer's research interests focus on the strategic management of operations, electronic commerce, and the effective use of advanced manufacturing technologies. He has published articles in Management Science, Decision Sciences, Journal of Operations Management, and Business Horizons, among others. His research has won the 1997 Chan Hahn award and the 1996 Stan Hardy award. He is a member of the Academy of Management, the Decision Sciences Institute, and the Society of Manufacturing Engineers.
boyerk@bus.msu.edu
John Olson is an assistant professor in the Department of Management at DePaul University. Dr. Olson earned a B.S. in mathematics and economics from the University of Minnesota, an M.B.A. from St. Cloud State University, and a Ph.D. from the University of Nebraska. Dr. Olson's research interests focus on the effective management of information in the supply chain, including e-commerce applications, EDI systems, and Just-in-Time systems. He has published articles in Interfaces and the Journal of Manufacturing Systems, and has co-authored a book on Just-in-Time management. He is a member of the Academy of Management, the Decision Sciences Institute, and INFORMS.
Eric Jackson is in his third year as a doctoral candidate in operations research at Michigan State University. His principal interests are in the applications of complex systems to quality concerns in business. He has degrees in chemistry and chemical engineering from the University of Michigan and worked as the technical director for a specialty chemical company before entering the Ph.D. program at Michigan State.
Second, and perhaps more importantly, the fundamental problem with collecting data of any kind with a survey is the challenge of capturing the attention and time of respondents. We live in a dynamic world where much of the population is enthralled with electronic gizmos, whether a Palm Pilot, cell phone, Game Boy, Nintendo, laptop computer, or any of thousands of other business or play devices, and electronic surveys offer an opportunity to capture attention in creative ways. Our belief when starting this data collection project was that a computer survey might have a hard time capturing the respondent's attention on first sight, and hence yield a lower response rate. But we believed that, once started, the computer survey might collect more reliable and valid data if we could provide the "fun feel" of a game or some other electronic assistant. As with many of the other electronic devices listed (think of email, for one), we believe that people use these devices because they seem painless and productive, even though they may not always help us communicate or produce more efficiently. In short, a computer survey with the same number of questions as a printed survey may give the perception of taking less time to complete. This is particularly true since a respondent can quickly visualize the length of a print survey but is unable to judge the length of a computer survey. We therefore took precautions to communicate the expected length of time for completion (15-20 minutes) and placed a notification halfway through the survey apprising respondents of their progress and the number of remaining questions.
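For concreteness, here is a console-style sketch of the two precautions just described: a stated time expectation up front and a midpoint progress notice. The question list is a placeholder of our own, not the actual instrument:

```python
# Placeholder questions; the real instrument had many more items.
questions = [f"Q{i}: ..." for i in range(1, 21)]
answers = []

print("This survey should take about 15-20 minutes to complete.")
for i, q in enumerate(questions, start=1):
    answers.append(input(q + " "))
    if i == len(questions) // 2:
        # Midpoint notice apprising the respondent of progress
        print(f"You are halfway done; {len(questions) - i} questions remain.")
```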
Disadvantages and Drawbacks
Unfortunately, nothing in life is free; there are also some significant drawbacks to electronic surveys. Probably the biggest downside is that people are often not completely comfortable with computer technologies. Even people who are fairly computer savvy are not always willing to spend time learning, or trying to figure out, a new application that they will not use again. It is therefore very important to target and control for this factor as much as possible. The other primary concern involves data quality. Are responses to electronic surveys identical, similar, or different from those to traditional print surveys? In what ways do they differ? Arguments can be made on either side. In some ways the data ought to be of better quality because of the ability to present information in a more interactive, dynamic framework. But other problems may lead to poorer quality; for example, a respondent's relative comfort with computers may be highly associated with data quality (along with the survey's layout and presentation). A paper survey offers a quick, obvious look at its contents, whereas the contents of a computer survey are more hidden. Another argument against electronic surveys is that the data may be biased by the nature of the data collection itself. For example, if you ask a question about comfort level with technology, the results may be inflated by self-selection: respondents who are "comfortable" using the computer to respond to the survey are also more comfortable with technology in general.
At least three other issues crop up with respect to electronic surveys. First, there is a potentially higher risk of lost data: e-mails can get lost in the ether, or, in our case, the physical computer disks can get damaged in transit. (Interestingly, we had three or four respondents who sent the computer disk back with no responses. We are not sure whether this was an accident, or whether the person was trying to send the disk back without completing it in order to get the $15 rebate we promised. This trick is harder to pull with a paper survey!) Second, we experienced difficulties preparing the computer disks for mailing. The software we used to administer the survey needed to be written onto every disk, at an average of about 1.5 minutes per disk. This becomes a problem when sending out 400 computer disks. We had a research assistant helping with this, but obviously the 10 hours spent loading and unloading disks into a disk drive were not the most exciting! A last concern is computer viruses: many respondents are leery of transmissions or disks from people they do not know, and researchers must likewise be cognizant of the risk of receiving a virus in return. In short, there are many potential problems with electronic surveys, yet many of them can be limited by careful study design.
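The duplication chore itself is easy to script; a sketch of the loop we effectively performed by hand is below. The payload file names and the drive letter are hypothetical, and a person still has to swap disks, which is where the 1.5 minutes per disk went:

```python
import pathlib
import shutil

SURVEY_FILES = ["run.exe", "survey.dat"]   # hypothetical survey payload
FLOPPY = pathlib.Path("A:/")               # destination floppy drive
DISK_COUNT = 400

for disk in range(1, DISK_COUNT + 1):
    input(f"Insert blank disk {disk} of {DISK_COUNT}, then press Enter...")
    for name in SURVEY_FILES:
        shutil.copy(name, FLOPPY / name)   # write the survey onto the disk
    print(f"Disk {disk} written.")
```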
The Literature on Electronic Data Collection
There is a fairly extensive literature on electronic data collection techniques, but most of it examines only a single issue or design factor at a time. Space does not permit a long review of the literature, so we will summarize the open questions we have found (for a more complete literature review, please contact one of the authors). Questions that have arisen from prior studies utilizing electronic surveys include: (1) Do electronic surveys and paper surveys elicit comparable response rates? (2) Do people who respond to surveys using different media types answer questions differently? (3) How can electronic collection methods be designed to enhance their usefulness as information collection tools? Our goal is to provide researchers with personal insights from our findings, both positive and negative, rather than to present a statistical study of relative reliability, validity, and quality. Thus, the following section presents an overview of our experiences.
What We Found
Our contact sample consisted of roughly 1,000 customers who had purchased products over the Internet from a leading retailer of office supplies. The full survey was conducted using two methods. First, approximately 60% of the sample was contacted with a traditional printed survey (four pages in length), accompanied by a cover letter explaining that we would provide participants with a survey summary and a $15 rebate bonus. The cover letter also explained that we would keep all results anonymous and report only aggregate findings. The office supply company we worked with also provided a letter stating its interest in the study and explaining that the authors were acting as independent, non-biased, third-party researchers. The second data collection method used a computer survey program named Sensus. This program (available from Sawtooth Technologies at http://www.sawtooth.com/) allows a written survey instrument to be coded onto a floppy disk using fairly simple programming rules. The respondent is then asked to load the disk into his or her computer, go to the start menu, and type "a:run" (we developed form labels for the disks with the respondent's name and these instructions). From this point the program runs by itself; the user clicks through the questions sequentially, then places the finished disk in a business-reply envelope and mails it back.
Our research design allows a controlled comparison of two different data collection methods. It also differs from a similar study by Goldsby, Savitskie, Stank, and Vickery (2001) in that we mailed out physical computer disks rather than using the electronic mail approach they employed. Both approaches have pluses and minuses. Since there has been little application of electronic surveys in operations management studies, our goal here is to present our preliminary findings in order to provide some insights for other operations management scholars considering the use of electronic surveys.
We sent out a total of 1,045 surveys in our first round of mailings. Several steps were taken to increase the response rate, including the inclusion of a business-reply envelope, an incentive to complete the survey, and several follow-up letters. The first reminder letter was mailed two weeks later, re-emphasizing the confidential nature and importance of the survey. A second follow-up letter and a second copy of the survey were mailed after six weeks to companies that had not filled out the original. Very few (less than 5%) of the mailings were returned due to incorrect addresses or the contact person having left the company. This high accuracy rate is due to the currency of the database we received: most of our contact list had conducted business with the office supplies retailer within the previous six months.
Finally, a note on sampling: we initially selected 2,000 names because the office supplies retailer decided to pre-email these customers and ask whether it was acceptable for us to contact them. Those who responded "no" were deleted from the list. This was done to uphold a privacy policy that customer information not be sold or given away. Less than 5% of the customers pre-contacted by email indicated that they did not want to be included in the study.
The final tally was 416 usable responses out of 1,045 total surveys, a 39.8% response rate. The response rates for the printed (261/631 = 41.4%) and computer (155/414 = 37.4%) versions of the survey were quite similar. The overall response rate is higher than that seen in similar operations management studies (Kathuria, 2000). Our evidence suggests that it is possible to obtain response rates for electronic surveys that are comparable to those for printed surveys.
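As a quick check of our own (not part of the original analysis), a two-proportion z-test on the counts above suggests the difference between the two rates is not statistically significant:

```python
from math import sqrt, erf

# Response counts from the study
print_resp, print_sent = 261, 631
comp_resp, comp_sent = 155, 414

p1, p2 = print_resp / print_sent, comp_resp / comp_sent
pooled = (print_resp + comp_resp) / (print_sent + comp_sent)
se = sqrt(pooled * (1 - pooled) * (1 / print_sent + 1 / comp_sent))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

print(f"print: {p1:.1%}, computer: {p2:.1%}, z = {z:.2f}, p = {p_value:.2f}")
```

Running this gives z of roughly 1.27 and p of roughly 0.20, consistent with treating the two rates as comparable.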
Based on our experience, we believe that several actions can be taken to improve response rates. These include:
• Carefully targeting the sample (in this case, an Internet purchasing study sent to Internet customers of a major retailer).
• Carefully explaining the purpose and usage of the survey data, along with clear letters of endorsement from the sponsoring company.
• Follow-up, follow-up, and more follow-up. Wear them down and be persistent!
• Providing a clear incentive for participating in the survey.
• Making it easy to complete.
• Sending something tangible. We sent the computerized survey on an actual disk rather than as an attachment or a link to a Website. Our feeling is that people are less likely to throw out something tangible; in contrast, an email is easy to delete.
There is nothing new or particularly creative about the first five points above. In fact, these principles apply equally well to any type of survey instrument. However, we believe that with electronic surveys it is very important to increase tangibility. People tend to respond differently to something they can touch. Our goal was to put a disk in respondents' hands that they might feel guilty about throwing out. Once they opened the appropriate file, the disk would also make it easier for them to fill out the survey. It completely surprised us that this worked as well as it did!
One interesting outcome was that five respondents could not open the computer version of the survey, either because of some hardware difficulty or because they had an Apple computer and our survey was IBM-compatible only. This represents less than 5% of our sample, so the problem does not appear to be huge, but it is worth watching. In these five cases we had the respondents fill out a print version of the survey. Of course, we do not know how many more potential respondents to the computer survey declined to fill it out because of similar difficulties. This is certainly something researchers need to consider carefully and do their best to minimize. An important precaution is to develop the instrument with the lowest common denominator in mind; many respondents have computer equipment that is old and outdated and will not be able to handle programs developed on newer software.
A third interesting finding from our study involves the relationship between the time taken to complete the survey and survey quality. The computer survey recorded the start and end time for each respondent, allowing us to compute the length of time spent completing the survey. As shown in Figure 1, the average response time was 17 minutes, 53 seconds, but there is wide variation in individual times. One question we are currently examining is whether the time taken to complete the survey correlates with the "quality" of the data. For example, it is possible that respondents who filled the survey out quickly (four took less than six minutes) did so by simply putting their fingers on autopilot and not really reading and thinking through the questions. That is very likely the case for the one respondent who completed the entire questionnaire in 1.5 minutes (unless this person was a speed-reader!). We are in the process of conducting an in-depth assessment of the respondents at the two ends of the spectrum (quick and slow responses). Clearly, the ability to track the time spent filling out the survey offers value to researchers who seek to balance and optimize both the quantity and quality of respondents.
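Because the start and end times are captured automatically, screening for suspiciously fast (or slow) respondents becomes mechanical. A minimal sketch, using invented completion times in place of our actual data:

```python
import statistics

# Hypothetical completion times in minutes, one per respondent
times = [1.5, 4.2, 5.8, 5.9, 12.3, 14.8, 16.0, 17.9, 22.4, 44.9]

median = statistics.median(times)
mad = statistics.median(abs(t - median) for t in times)  # robust spread

# Flag unusually fast or slow respondents for closer inspection
fast = [t for t in times if t < 6]               # possible "autopilot" answers
slow = [t for t in times if t > median + 3 * mad]
print(f"median = {median:.1f} min, fast = {fast}, slow = {slow}")
```

The six-minute cutoff here simply mirrors the threshold discussed above; in practice the cutoffs would be calibrated against the observed distribution.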
Another interesting outcome from the computer version of the survey is the range of written responses to the more open-ended questions. Obviously, a printed survey can also include open-ended questions, but there are two basic problems: (1) the respondent has to write his or her response by hand, which can be tedious to produce and hard to read; and (2) the researcher needs to enter the response into the database. Electronic surveys address both of these problems. We found that while many respondents skipped the open-ended questions, many others answered, often voluminously. This less-structured feedback often provides interesting insights, as well as some amusing ones. The comments we received regarding the best and worst features of the office supply products Website we were studying ranged from "I wish we could do ALL shopping on-line, it is fast and convenient" to "It BITES." Obviously, there are some disparate opinions!
Summary

Overall, our initial venture to collect information via an electronic survey was fairly successful. However, numerous questions remain regarding future applications of this methodology. While we achieved comparable response rates and aggregate measures of data reliability and validity (means, standard deviations, and Cronbach's alphas were similar for the two data collection methods), we found the electronic survey to be more time- and effort-intensive. We are currently performing more sophisticated data analysis to compare the relative data "quality" of computer versus print surveys.
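For readers who want to reproduce the aggregate reliability comparison, Cronbach's alpha can be computed per scale for each collection method. The sketch below uses the standard formula; the Likert-scale responses are invented for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of equal-length lists, one list of scores per survey item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Invented 5-point Likert data: 4 items x 6 respondents per method
print_items = [[4, 5, 3, 4, 2, 5], [4, 4, 3, 5, 2, 4],
               [5, 5, 2, 4, 3, 5], [4, 5, 3, 4, 2, 4]]
disk_items = [[3, 5, 4, 4, 2, 5], [4, 4, 4, 5, 1, 4],
              [4, 5, 3, 4, 2, 5], [3, 5, 4, 5, 2, 4]]

print(f"print alpha = {cronbach_alpha(print_items):.2f}")
print(f"disk alpha  = {cronbach_alpha(disk_items):.2f}")
```

Similar alphas across the two methods, as we observed, suggest the collection medium did not degrade scale reliability.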
Essentially, the decision whether to use electronic surveys in the future boils down to a relative weighting of (1) data quality and (2) data-gathering cost. We believe that our implementation of this technology helped improve data quality, but at a slightly increased data-gathering cost. There is a great deal of potential to improve both dimensions, but there is also a need for a great deal more study to determine the efficacy of electronic surveys as well as the best methodology for utilizing this tool.
[Figure 1: Distribution of response times for the computer-based survey. Histogram of the number of surveys (y-axis) by completion time in minutes (x-axis, binned from 1-3.99 minutes up to 45 minutes and over); average response time = 17:53.]

Dr. Shawnee Vickery
Broad Graduate School of Management
N358 North Business Complex
Michigan State University
East Lansing, MI 48824
517-353-6381
fax: 517-432-1112
vickery@msu.edu
References

Cranor, L. F., & Cytron, R. K. (1997). Sensus: A security-conscious electronic polling system for the Internet. Proceedings of the Hawaii International Conference on System Sciences, Wailea, Hawaii.

Goldsby, T. J., Savitskie, K., Stank, T. P., & Vickery, S. K. (2001). Web-based surveys: Reaching potential respondents on-line. Decision Line, 32(2), 4-6.

Kathuria, R. (2000). Competitive priorities and managerial performance: A taxonomy of small manufacturers. Journal of Operations Management, 18(6), 627-642.