Note: This is a 2008 extension and update of an overview that appeared in J. Blasius, J. Hox, E. de Leeuw & P. Schmidt (Eds.), Social Science Methodology in the New Millennium. Opladen, FRG: Leske + Budrich, 2002 (CD-ROM).
The Effect of Computer-Assisted Interviewing on Data Quality: A Review of the Evidence
Edith D. de Leeuw1
MethodikA/Department of Methodology and Statistics, Utrecht University
1 The author thanks Jean Martin, Peter Lynn, Tony Manners, Joop Hox, and Bill Nicholls II for their helpful comments.
Abstract
Computer assisted telephone interviewing, and to a lesser degree, computer assisted face-to-face
interviewing, are by now widely used in survey research. Recently, self-administered forms of
computer-assisted data collection, such as web surveys, have become extremely popular.
Advocates of computer assisted interviewing (CAI) claim that its main advantages are improved
data quality and lower costs. This paper summarizes what is currently known about computer
assisted data collection methods. The emphasis is on data quality and the influence of
technology on the respondent.
Key words: Computer assisted data collection, CADAC, CASIC, CATI, CAPI, CASI, DBM,
web surveys, data quality, acceptance, usability, human factor
1. Introduction
Whether computer assisted data collection should be used in survey research seems no longer an
issue of debate. Computer assisted methods have replaced paper-and-pen methods at an
increasing pace, and in Europe and North America many government survey organizations now
employ these new methods for their surveys (De Leeuw, Nicholls, Andrews, & Mesenbourg,
2000). Large market research organizations and academic research organizations are following
(Blyth, 1998; Collins, Sykes & O'Muircheartaigh, 1998). Characteristic of computer assisted
data collection is that questions are read from the computer screen, and that responses are
entered directly into the computer, either by an interviewer or by the respondents themselves. An
interactive program presents the questions in the proper order, which may be different for
different (groups of) respondents. For a historical overview see Couper and Nicholls (1998). For
a taxonomy of various forms of computer assisted data collection methods, see Appendix A; a
more detailed overview can be found in De Leeuw and Nicholls (1996) and in De Leeuw, Hox,
and Snijkers (1998).
Computer Assisted Telephone Interviewing (CATI) is the oldest form, and is also the
most prevalent. CATI is now the dominant method for telephone surveys in market research,
government organizations and universities, although paper-and-pencil methods are still being
used with good results in small survey organizations and for short surveys. For face-to-face
interviews, Computer Assisted Personal Interviewing (CAPI) is rapidly gaining in popularity
and is already widely used in government statistical agencies and market research firms, and
research departments at universities are following. The latter were very quick to see the potential
of CAPI when surveying special populations, especially in combination with Computer Assisted
Self Interviewing (CASI): a computer assisted form of the self-administered questionnaire (for
an overview see De Leeuw, Hox, & Kef, 2003).
Computerized self-administered data collection takes many forms. The oldest is the
electronic questionnaire or electronic test, which is used in the medical and psychological
sciences (cf. Weisband & Kiesler, 1996). In survey research, CASI is frequently used during
CAPI-sessions on sensitive topics, when the interviewer hands over the computer to the
respondent for a short period, but remains available for instructions and assistance. This is
equivalent to the traditional procedure where an interviewer might give a paper questionnaire to
a respondent to fill in privately. A very promising variant is Audio-CASI, where the respondent
listens to the questions read by a computer-controlled digitized voice over a headset, and at the
same time sees the question on the computer screen. This helps overcome literacy problems with
special populations and guarantees the privacy of the respondent (cf. Turner et al, 1998;
Johnston & Walton, 1995).
For the traditional mail survey computer assisted equivalents have been developed too.
Disk-by-Mail has been used on a regular basis for some time, especially in establishment
surveys, and methodological knowledge on how to implement a successful Disk-by-Mail survey
is available (e.g., Ramos, Sedivi, & Sweet, 1998; Saltzman, 1992; Witt & Bernstein, 1992; Van
Hattum & De Leeuw, 1999). In a Disk-by-Mail survey (DBM) a disk containing the
questionnaire and a self-starting interview program is mailed to the respondent via the postal
service. The respondent runs the program on his or her own computer and returns the diskette
containing the completed questionnaire.
Electronic mail surveys (EMS), Internet surveys and web surveys differ from DBM in
the sense that respondents receive the request and return the survey data electronically through
the Internet. This is a field still very much in development, and its usefulness depends on the
Internet penetration in specific countries. In some countries, such as the USA, Scandinavia, and
the Netherlands, the Internet penetration is relatively high, and web surveys are being done on a
regular basis; in other countries, web surveys are only possible with special populations, but the
experience is positive and a multimode approach has proved to be successful (Clayton &
Werking, 1998; Schaefer & Dillman, 1998; Couper, 2000; Dillman, 2000; De Leeuw & Hox,
2008; Lozar Manfreda & Vehovar, 2008). Another way to overcome the limited computer
access in web surveys is computer assisted panel research. A panel of households is selected
and computers and communication equipment are provided by the research institute. Surveys are
sent electronically to the household members on a regular basis, and after completion are sent
back automatically. This approach proved successful for consumer panels in the Netherlands
(Saris, 1998). For an illustrative example, see the CentERpanel (www.centerdata.nl), located at
Tilburg University.
One of the main reasons that computer assisted data collection has become popular so
quickly was the general expectation that it would improve data quality and efficiency and reduce
costs (cf. Blyth, 1998; De Leeuw & Nicholls, 1996; Nelson et al, 1972). In the last two decades
these claims have been investigated mainly through empirical mode comparisons of
computerized and paper-and-pen versions of the same questionnaire. These studies mainly focus
on data quality; only a few also investigate costs. In the remainder of this paper I will
concentrate on data quality.
I start with a model for the influence of computer assisted interviewing, discriminating
between technological and methodological data quality. I will proceed with a short overview of
empirical evidence for technological data quality, timeliness and cost reduction. I will then focus
on methodological data quality: what happens in the interview situation and how it influences
data quality. Since acceptance of computer-assisted methods is an important criterion by itself, I
also include research on the attitudes and opinions of interviewers and respondents. I end with a
discussion on the challenges that new emerging technologies offer.
2. Survey data quality and computer-assisted interviewing
As early as 1972, Nelson, Peyton, and Bortner pointed out that automatic routing to the next
question and range checks on the given answers would enhance data quality. They emphasize
technological or operational data quality: the reduction of previously required post interview
data processing activities (Nicholls, 1996). Operational data quality is affected by all the
technological possibilities of computer assisted interviewing.
Factors associated with the visible presence of a computer and its effect on the interview
situation may affect data quality, apart from the technical aspects. These factors affect
methodological data quality, defined by an absence of nonsampling survey bias and error
(Nicholls, 1996; cf. Groves, 1989).
Recently, Total Quality Management (TQM, Deming, 1982) has received much
attention in industrial settings and to a lesser degree in statistical and survey establishments. As a
consequence, additional criteria for 'good' data collection methods have been formulated. The
most important are timeliness and costs: does a new technology provide the data more quickly
and does it reduce the costs? (Blyth, 1998).
2.1. Potential for improving technological data quality
Compared to an optimally implemented paper-and-pencil interview, the optimally implemented
computer assisted interview has four apparent advantages.
(1) There are no routing errors. If a computer system is correctly programmed, routing
errors, that is, errors in the question order, skipping and branching, do not occur. Based on
previously given answers the program decides what the next question must be, and so both
interviewer and respondent are guided through the questionnaire. Missing data because of
routing and skipping errors does not occur. Also, questions that do not apply to a specific
respondent are automatically skipped. As a result, automatic routing reduces the number of data
errors.
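As an illustration of such programmed routing, the fragment below is a minimal, hypothetical sketch; the question ids, wording, and skip rules are invented for this example and are not taken from any particular CAI package. The point is that the program, not the interviewer or respondent, decides which question comes next.

```python
# Minimal sketch of automatic routing (skip logic) in a computer-assisted
# questionnaire. All question ids, wording, and routing rules are hypothetical.

questions = {
    "Q1": {"text": "Are you currently employed?",
           "route": lambda answer: "Q2" if answer.lower() == "yes" else "Q4"},
    "Q2": {"text": "How many hours do you work per week?",
           "route": lambda answer: "Q3"},
    "Q3": {"text": "Do you commute by car?",
           "route": lambda answer: "Q4"},
    "Q4": {"text": "How old are you?",
           "route": lambda answer: None},  # None ends the interview
}

def run_interview():
    """Walk through the questionnaire; inapplicable questions are never shown."""
    answers = {}
    current = "Q1"
    while current is not None:
        q = questions[current]
        answer = input(q["text"] + " ")
        answers[current] = answer
        current = q["route"](answer)  # the program decides what comes next
    return answers

if __name__ == "__main__":
    print(run_interview())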
(2) Data can be checked immediately. An optimally implemented interview program will
perform some internal validity checks. The simplest checks are range checks that compare the
given response to the range of possible responses. Thus the program will refuse the response '8'
to a seven-category Likert scale, and then ask to correct the response. Range checks are
straightforward when the question has only a limited number of response categories. More
complicated checks analyze the internal consistency of several responses. Consistency checks
are more difficult to implement; one must anticipate all valid responses to questions, list possible
inconsistencies, and devise a strategy for the program to cope with them. In a paper-and-pencil
study, internal validity checks have to be conducted at the data cleaning stage that usually
follows the data collection stage. However, when errors are detected, they can only be recoded
to a missing data code because it is no longer possible to ask the respondents what they really
meant. During a computer-assisted interview there is an opportunity to rephrase the question and
correct range and consistency errors. This should lead to fewer data entry errors and missing
data.
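To make this concrete, the sketch below shows a hypothetical range check on a seven-point scale and an equally hypothetical consistency rule; real CAI systems usually specify such checks declaratively, but the underlying logic is the same, and the check is applied while the respondent is still available to correct the answer.

```python
# Hypothetical examples of a range check and a consistency check, applied at
# entry time rather than in a separate data-cleaning stage after fieldwork.

def ask_likert(prompt, low=1, high=7):
    """Range check: refuse any answer outside the seven-point scale."""
    while True:
        raw = input(f"{prompt} ({low}-{high}): ")
        if raw.isdigit() and low <= int(raw) <= high:
            return int(raw)
        print(f"Please enter a whole number between {low} and {high}.")

def consistent(age, years_in_current_job):
    """Consistency check (made-up rule): job tenure cannot exceed age minus 15."""
    return years_in_current_job <= max(age - 15, 0)

if __name__ == "__main__":
    satisfaction = ask_likert("How satisfied are you with your job?")
    age = int(input("How old are you? "))
    tenure = int(input("How many years have you been in your current job? "))
    if not consistent(age, tenure):
        # In a real interview the program would re-ask the questions and let the
        # respondent resolve the inconsistency, instead of coding it as missing.
        print("These answers seem inconsistent; please check them.")
```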
(3) The computer offers new possibilities to formulate questions. One example is the
possibility to randomize the order of questions in a scale, giving each respondent a unique
question order. This will eliminate systematic question order effects. Response categories can
also be randomized, which avoids question format effects (e.g., recency effects). The computer
can also assist in the interactive field coding of open questions using elaborate coding schemes,
which would be unmanageable without a computer. Finally, the computer can be used to employ
question formats such as drawing line lengths as in psychophysical scaling, and other forms of
visual aids, which in paper and pencil methods are more awkward to use.
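A minimal sketch of such per-respondent randomization might look as follows; the scale items, response categories, and respondent id are invented for this example.

```python
import random

# Hypothetical scale items and response categories.
scale_items = [
    "I enjoy working with computers.",
    "Computers make my work easier.",
    "I feel nervous when using a computer.",
]
categories = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def randomized_presentation(respondent_id):
    """Return a unique item order and category order for this respondent.

    Seeding with the respondent id makes the randomization reproducible, so the
    order actually presented can be stored and used in later analyses."""
    rng = random.Random(respondent_id)
    item_order = rng.sample(scale_items, k=len(scale_items))
    category_order = rng.sample(categories, k=len(categories))
    return item_order, category_order

items, cats = randomized_presentation(respondent_id=1042)
print(items)
print(cats)
```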
(4) There is no separate data entry phase. This means that no extra errors are added. It
also implies that the first tabled results can be available soon after the data collection phase. On
the other hand, construction, programming, and checking of the questionnaire take considerable
time in computer-assisted data collection. Thus, a well-planned computer-assisted survey has a
real advantage when the results must be quickly available right after data collection (as in
election forecasts).
2.2. Potential for improving methodological data quality
The visible presence of a computer may affect data quality, apart from the technical aspects of
using a computer. Both usability (e.g., simple human-computer interface, learning) and
psychological factors can play a role. As with most technological innovations the effects are for
the most part temporary. After some time, one gets used to the new machine, and its influence
on the situation becomes smaller. In the early days computer access was limited: even in the
USA, an early adopter of technological innovations, in 1997 only around 45% of households had
computers and the proportion with Internet access was around 22% (cf. Witt, 1997). But these numbers have
increased rapidly. In 1998, the US Bureau of the Census estimated that 26% of the households
used the Internet at home, and the percentage of persons using the Internet (inside and outside the
home) was estimated at 33% or a third of the adult population of 18 years and older, which
corresponds to 65 million US adults (Couper, 2000). But the picture differs from country to
country; although Internet access is still growing and now around 70% of the US population
has access to the net, the picture is diverse with percentages ranging from 76% coverage for
Sweden to 3.6% in Africa (www.internetworldstats.com, data from August 2007).
Compared to traditional paper and pencil methods, the presence of a computer could
lead to the following effects (positive and negative) on how the whole data collection procedure
is perceived.
(1) Less privacy. When one is totally unfamiliar with computers there could be a 'big
brother' effect, leading to more refusals and socially desirable answers to sensitive questions.
When researchers first started to use computer assisted data collection, this was a much-feared
effect.
(2) More privacy. Using a computer could also lead to the expectancy of greater privacy
by the respondents; responses are typed directly into the computer and cannot be read by anyone
who happens to find the questionnaire. Much depends here on the total interview situation and
how the survey is implemented.
(3) Trained interviewers may feel more self-confident using a computer, and behave
more professionally. This in turn could lead to more confidence and trust of the respondent in
the interviewing procedure as a whole.
(4) The knowledge that the system accurately records information about the interview
process itself (e.g., time and duration of the interview, the interval between interviews and the
order in which they are carried out) inhibits interviewers to 'cheat'.
(5) The use of a computer may distract interviewers. They have to pay attention to using
the computer correctly and typing in the answers accurately. If interviewers cannot touch-type,
typing in long answers may lead to less eye contact between interviewers and respondents,
causing the interviewers to miss nonverbal reactions of the respondents. If the computer is
located between the interviewer and the respondent, even the physical distance may be greater
than in a paper and pen interview. These factors all weaken the ‘rapport’ between interviewer
and respondent; as a consequence the interview may not be conducted optimally, and data
quality may suffer.
(6) On the other hand, a well-trained and experienced interviewer can rely on the
computer for routing and complex question sequences, and therefore pay more attention to the
respondent and the social processes involved in interviewing.
2.3. Potential for increased timeliness and reduced costs
Going from paper-and-pencil to computer assisted interviewing requires an initial investment,
not only in equipment, but also in time. One has to invest in hardware, software and in acquiring
hardware- and software-related knowledge and skills. In addition, basic interviewer training now
needs to include training in handling a computer and using the interviewing software.
After the initial investments are made, a computer-assisted survey may be less costly and
quicker than traditional data collection, but it all depends on the study: its complexity, its size,
and its questionnaire. To evaluate the cost efficiency and timeliness of a computer assisted
survey, a distinction should be made between front-end processing and back-end processing. In
general, a well-designed computer assisted survey requires investing more time, effort, and
money in the beginning of the research (front-end processing), which is saved at the end stage
(back-end processing). In particular, the design and implementation of range and consistency
checks (front-end) reduces the time needed to prepare the data for analysis (back-end), and
no questionnaires have to be printed, entered, or coded.
3. Empirical evidence for improved quality: technological data quality, timeliness and cost.
3.1. Technological data quality
Technological data quality was defined above as the reduction of previously required post
interview data processing activities. Using a well-programmed and tested interview program can
reduce the number of errors in the data by preventing mistakes (cf. section 2.1). Empirical
studies confirm this expectation.
Computer Assisted Telephone Interviewing (CATI).
In their review Groves and Nicholls (1986) conclude that CATI leads to less missing data
because it prevents routing errors. For instance, Groves and Mathiowetz (1984) found five times
more skip errors in paper telephone surveys than in CATI. It is therefore not surprising that
post hoc data cleaning finds more errors with traditional paper-and-pencil methods than with
CATI. However, no difference is found in respondent induced missing data (i.e., 'do-not-know'
and 'no-answer' responses). The same conclusions are drawn by Weeks (1992), Martin and
Manners (1995), and Nicholls, et al (1997).
In addition, Catlin and Ingram (1988) studied the possible effects of computer use on
open questions; they found no differences between computer assisted and paper interviewing in
codability of answers to open questions or length of answers (number of words used). See also
Kennedy et al (1990).
Computer Assisted Personal Interviewing (CAPI)
The percentage of missing data is clearly lower in CAPI, mostly because interviewers cannot
make routing errors (Sebestik, Zelon, DeWitt, O'Reilly & McCowan, 1988; Olsen, 1992).
Bradburn et al. (1992) found in a pilot CAPI study that the number of missing data caused by
respondents ('do-not-know', 'no-answer') also diminishes, but in the main study this was not
replicated (Baker & Bradburn, 1992; Olsen, 1992). Other studies also fail to find a difference in
respondent induced missing data (Bemelmans-Spork, Kerssemakers, Sikkel & Van
Sintmaartensdijk, 1985; Martin, et al., 1994).
Little is known about data quality regarding open questions. Baker (1992) summarizes a
study by the French National Institute for Statistical and Economical Research (INSEE) that did
not find any difference between PAPI and CAPI in this respect.
Computer Assisted Self Interviewing (CASI, CSAQ)
Computer Assisted Self Administered Questionnaires (CSAQ) and Computer Assisted Self
Interviewing (CASI) make it possible to use very complex questionnaires without the aid of an
interviewer. But also in standard, less complex self-administered questionnaires, CASI reduces
item nonresponse (see Ramos, et al, 1998). In a well-designed and thoroughly tested computer
questionnaire, it is not possible for a respondent to skip a question by mistake. This is clearly
illustrated by the findings of Van Hattum & De Leeuw (1999). They used computer assisted self
administered questionnaires in primary schools and compared data from paper and pencil
(PAPI) self administered questionnaires with data from computer assisted self administered
questionnaires (CSAQ). In the CSAQ-condition the mean percentage of missing values was
5.7% (standard deviation= 3.4%), while in the PAPI-condition the mean of the percentage
missing values was 14.1% (standard deviation= 25.0%). It is interesting to note that not only the
average amount of missing data is less in computer assisted data collection, but also that the
individual differences, indicated by the standard deviation, are smaller. Van Hattum & De
Leeuw (1999) attribute this to the fact that with a paper questionnaire children who are not
concentrating on the task or who are careless can easily skip a question or even a whole page by
mistake, while CSAQ forces children to be more precise.
A small number of studies have explicitly compared respondent entry errors in
computerized versus paper and pen questionnaires. Fewer respondent errors are reported in
CASI than in paper and pen self-administered questionnaires. For an overview, see Nicholls et al
(1997).
3.2. Timeliness and costs
A distinction should be made between front-end processing and back-end processing (cf. 2.3). In
general, front-end processing (i.e., developing, implementing and testing the questionnaire)
takes more time and is therefore more expensive. On the other hand, no data-entry is needed and
data editing and data cleaning take less time: back-end processing is faster. With very large
surveys this will save time. In general, there is no difference in the total time needed for the
research. But once the interviewing has started, results are available much faster than in
traditional paper-and-pencil interviewing. Samuels (1994) mentions a reduction of delivery time
of 50% for the results of an omnibus survey. When timeliness and a fast release of results are
important for a client, this is an important advantage of computer-assisted data collection over
paper-and-pencil methods. During interviewing, time may be saved by the improved efficiency
of computer assisted sample management (Nicholls & De Leeuw, 1996).
Computer Assisted Telephone Interviewing (CATI).
Most studies that attempt to weigh the costs and advantages of CATI conclude that the initial
investments in hardware and software pay off only for large scale or regularly repeated surveys
(Groves & Tortora, 1998). A rule of thumb is that the break-even point is at about a thousand
telephone interviews. Below that number, the argument of cost reduction is, by itself, not
sufficient to justify CATI (Weeks, 1992).
Computer Assisted Personal Interviewing (CAPI)
CAPI requires a larger investment in hardware, software and support staff than CATI (Blyth,
1998). These high fixed costs are only compensated by lower variable costs (e.g., savings in
printing costs, data entry, and editing) for large scale surveys. Bond (1991) states that even when
computers are used frequently in the fieldwork it will take at least a year before the investment
starts to pay back.
There is limited empirical data on cost comparisons between computer assisted and
paper and pencil personal interviews. Two studies systematically assess costs for CAPI: initial
investment in hardware and software was excluded, but extra fieldwork costs for training and
supervision were included. Sebestik et al. (1988) compared costs in a small scale CAPI
experiment. Their conclusion is that overall CAPI was more expensive, mostly because of added
costs in training and supervising interviewers. In a larger experiment Baker and Bradburn (1992)
conclude that CAPI was still more expensive (±12%) than PAPI; the cost reduction in entering
and cleaning data was not large enough to offset the higher training and supervision costs. Baker
(1990) extrapolates these findings and concludes that when fixed hardware costs are excluded,
approximately 1500 CAPI interviews are needed to reach the break-even point between
increased front-end and decreased back-end costs.
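The arithmetic behind such break-even statements can be made explicit. The sketch below uses purely hypothetical cost figures (they are not taken from the studies cited) to show how the break-even number of interviews follows from higher fixed front-end costs and lower variable back-end costs per interview.

```python
# Hypothetical cost figures, for illustration only. The break-even point is
# where the extra fixed (front-end) cost of CAPI is offset by its lower
# variable (back-end) cost per completed interview.

capi_fixed = 30000.0        # programming, testing, extra training (hypothetical)
papi_fixed = 10000.0        # preparation and printing of the paper form (hypothetical)
capi_per_interview = 20.0   # no separate data entry or editing needed (hypothetical)
papi_per_interview = 35.0   # includes data entry and cleaning (hypothetical)

def total_cost(fixed, per_interview, n):
    """Total survey cost for n completed interviews."""
    return fixed + per_interview * n

# Solve capi_fixed + capi_var * n = papi_fixed + papi_var * n for n.
break_even = (capi_fixed - papi_fixed) / (papi_per_interview - capi_per_interview)
print(f"Break-even at about {break_even:.0f} interviews")

for n in (500, 1500, 3000):
    print(n, total_cost(capi_fixed, capi_per_interview, n),
          total_cost(papi_fixed, papi_per_interview, n))
```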
Computer Assisted Self Interviewing (CASI, CSAQ)
Computer assisted self-administered questionnaires (CSAQ) and Disk-by-Mail and e-mail
surveys have the advantage that no interviewers are needed, so in comparison with CATI and
CAPI they save costs. This is one of the main reasons why Baker (1998) predicts a decline of
interviewing and a rise of CASI and CSAQ2. When one compares computer assisted procedures
with the traditional paper mail survey cost savings are not so obvious. As with all forms of
computer assisted data collection, the extra investment in programming the questionnaire and
debugging only pays off for large surveys where printing and editing make the paper form more
costly (cf. Ramos, et al, 1998). In Disk-by-Mail, the mailing costs include a special protective
envelope. Also, a disk is heavier than a short paper questionnaire, which makes DBM in
general somewhat more costly than paper mail questionnaires (Saltzman, 1992). However,
when large numbers of longer questionnaires have to be mailed, DBM can be a real cost saver.
Van Hattum and De Leeuw (1999) systematically compare costs for a DBM and a paper mail
survey of 6000 pupils in primary schools. They conclude that the average cost for a completed
questionnaire is 1.01 US dollars for a Disk-by-Mail survey and 3.22 US dollars for a paper-and-
pen mail survey.
E-mail and web surveys pose an extra challenge for Europe. Clayton and Werking
(1998) describe the cost savings (e.g., labour, postage) in an e-mail survey of businesses.
Transmission costs (telephone) are practically zero. However, unlike the USA, in most
European countries local telephone calls are not free! This not only increases the costs for the
researcher, but also increases the costs (internet connect time both receiving and sending) for the
potential respondent. To ensure high response rates, one has to find ways to reduce respondent
costs comparable to prepaid return postage in mail surveys, or reimburse factual costs. In panel
surveys (e.g., Saris, 1998), the research organization usually reimburses costs and gives
additional incentives.
2 One of the major problems with e-mail surveys and Disk-by-Mail is still coverage. For a successful application, one is restricted to surveying special groups. Another possibility is a multimode approach (cf. Schaeffer & Dillman, 1999; De Leeuw, 2005).
4. Empirical evidence for improved quality: acceptance of new technology and
methodological data quality.
4.1. Acceptance of new technology
The use of a computer may have an influence on the behaviour of both interviewer and
respondent. Therefore, in the first applications of computer assisted interviewing special
attention was paid to the acceptance of the new technology.
Acceptance by interviewers CATI and CAPI
In the early days, when systems were slow and portable computers heavy, interviewer
acceptance was not general. Acceptance depended strongly on the speed and reliability of
systems (Nicholls et al, 1997; De Leeuw et al, 1998). With modern systems acceptance is high
(Weeks, 1992). Well-trained interviewers are positive about computer-assisted interviewing.
They appreciate the support that a good system offers when complex questionnaires are
employed (Riede & Dorn, 1991; Edwards, Bittner, Sherman Edwards & Sperry, 1993), they like
working with the computer (Martin et al., 1994), and derive a feeling of professionalism from it
(Edwards et al., 1993). However, crucial for acceptance is that interviewers are well-trained in
general computer skills, in the specific computer assisted interview system that is used, and in
general interviewing techniques (cf. Woijcik & Hunt, 1998). Besides training, ergonomic factors
are of influence too: readability of screens, well-defined function keys, and usability, are
important factors for acceptance (De Leeuw & Nicholls, 1996; Couper et al, 1997). In addition,
a good human-computer interface may contribute to the avoidance of human errors.
Acceptance by respondents and unit nonresponse: CATI and CAPI
In telephone interviews, respondents as a rule will not notice whether a computer is used or not;
it is therefore not surprising that no differences in unit nonresponse are found between CATI and
traditional paper and pen telephone interviews (cf. Nicholls et al., 1997; De Leeuw et al., 1998).
When computer assisted personal interviewing was introduced researchers were afraid of a
negative effect on response rates. But even in the first applications of the method in Sweden and
the Netherlands this did not occur (Van Bastelaer, Kerssemakers & Sikkel, 1987, p 39; Van
Bastelaer et al., 1988). Later studies confirm that CAPI and paper-and-pencil methods yield
comparable response rates in studies in the U.S.A. (Bradburn, Frankel, Baker & Pergamit, 1992;
Sperry, Bittner & Brandon, 1991; Thornberry, Rowe & Biggar, 1991), England (Martin,
O'Muircheartaigh & Curtice, 1994), Sweden (Statistics Sweden, 1989) and Germany (Riede &
Dorn, 1991). These studies also report very low percentages of spontaneous negative reactions
by respondents (1-4%). Most reactions are neutral or positive.
When respondents are explicitly asked for a reaction to using the computer, they
generally react positively and are found to prefer it (Woijcik & Baker, 1992). Baker (1990, 1992)
reports that most respondents find CAPI interesting, and attribute a greater degree of
professionalism to CAPI. The social interaction with the interviewer is generally described as
comfortable and relaxed. Only a small percentage (5%) reports negative feelings.
Acceptance by respondents and unit nonresponse: CASI
Various forms of computer assisted self-administered questionnaires appear to be appreciated by
the respondents3; they evaluate it positively and find it interesting and easy to use (for overviews
see Ramos et al, 1998; De Leeuw et al, 1998). Beckenbach (1995) reports that more than 80% of
the respondents had no problem at all using the computer or the interviewing program, and that
few respondents complained about physical problems such as eye-strain.
3 One should note that CSAQ/CASI is restricted to special populations. As a consequence, research is based on selected populations, which either had access to computers, or received a computer for the duration of the study (e.g., De Leeuw, Hox, & Kef, 2003).
The generally positive appreciation also shows in the relatively high response rates of
Disk-by-Mail (DBM) surveys. DBM response rates in market research vary between 25% and
70%, and it is not unusual to obtain response rates of 40 to 50 percent without using any
reminders (Saltzman, 1992). Assuming that this is a special population interested in the research
topic, an ordinary well conducted mail survey using no reminders may be expected to yield
about 35% response (Dillman, 1978; Heberlein & Baumgartner, 1978). The high response rates
may be partly caused by the novelty value of DBM, which will diminish over time. It should be
noted that Ramos et al (1998) found no evidence for higher response rates in DBM in academic
and government surveys.
How e-mail or web surveys will develop further remains uncertain. The novelty value is
wearing off, and electronic junk mail (spam) is increasing fast. Also, one mouse-click is enough
to throw away anything unwanted or uninteresting. This could lead to extremely low response
rates, which would threaten the validity of the conclusions. That nonresponse is a serious
problem for Internet surveys is illustrated by Lozar Manfreda et al (2007). In a carefully
conducted meta-analysis, they studied 45 empirical comparisons and found that on average
web surveys yield an 11% lower response rate than comparable paper mail and telephone
surveys. To ensure an acceptable response for e-mail and web surveys, one should carefully
analyze what makes electronic surveys different (e.g., security of the net, costs). These issues should be
carefully addressed, and in doing this we can learn from the past. Many principles that in the past
have proved to be successful in paper mail surveys, can be successfully translated to electronic
surveys (De Leeuw, 1997; Schaeffer & Dillman, 1999; Dillman, 2000). But, we have to go one
step further, we must learn to optimally use the enormous audio-visual potential of this new
medium (Couper, 2000).
There are promising results from panel surveys that use the Internet. In the Netherlands, at
Tilburg University, general population household panels now operate completely through the
Internet. Of course, panel members received instruction in how to use the new technology, a
help-desk is available, and all costs are reimbursed (CentERdata: www.centerdata.nl). In the USA,
Knowledge Networks operates along the same principles. These panels have in general high
response rates on individual surveys. Of course, what one should keep in mind is that the most
crucial stage in longitudinal research is the recruitment stage, where candidates for panel
membership may fail to participate for a variety of reasons. If this initial nonresponse is taken
into account, response is low (Sikkel, Hox, de Leeuw, 2008).
4.2. Methodological data quality
Computer Assisted Telephone Interviews (CATI)
In telephone interviews the computer is not visibly present. Respondents may occasionally hear
keyboard clicks, or be told by the interviewers that a computer is used. No systematic research
has been done on the effects of this knowledge, but the general impression is that it makes no
difference to respondents if they know that their answers are typed directly into a computer
(Catlin & Ingram, 1988; Groves & Nicholls, 1986; Weeks, 1992). It is therefore not surprising
that there are no indications for any differences in methodological data quality between
computer assisted and paper and pen telephone interviews. CATI does lead to less missing data
because it prevents routing errors, but there is no difference in respondent induced missing data
because of 'don't know' and 'no answer' responses. Also, no differences in 'openness' or social
desirability are found (Groves & Nicholls, 1986; Weeks, 1992).
Interviewers, however, know that a computer system is used, and that more rigid control
takes place. Computer assisted interviewing often leads to a greater standardization of the
interview, to the extent that interviewers sometimes complain about 'rigidity' (Riede & Dorn,
1991, p 51). In general, researchers appreciate this greater standardization because this
minimizes interviewer bias (Fowler, 1991). There is some confirmation of greater
standardization of interviewer behaviour in CATI: in a controlled comparative study, using the
same interviewers both for traditional and for computer assisted interviews, Groves and
Mathiowetz (1984) found less interviewer variance in CATI than in the paper-and-pencil
method.
Computer Assisted Personal Interviewing (CAPI)
In face-to-face interviews the computer is highly visible and respondents may react to its
presence. This could influence respondents’ trust in the privacy of the data. When researchers
first started to use CAPI, they feared a 'big brother' effect, leading to more refusals and socially
desirable answers to sensitive questions. An alternative hypothesis was that the use of a
computer could also lead to feelings of greater privacy by the respondents; responses are typed
directly into the computer and cannot be read by anyone who happens to find the questionnaire.
There is no hard empirical evidence for either hypothesis. The acceptance of computer assisted
face-to-face interviewing is high for both respondents and interviewers, and there are no
indications that using a computer disturbs the interviewing situation (Beckenbach, 1992).
Bradburn et al. (1992) found in a pilot CAPI study that the amount of missing data
explicitly caused by respondents ('do-not-know', 'no-answer') also diminished, but in the main
study this is not replicated (Baker & Bradburn, 1992; Olsen, 1992). Other studies also fail to
find a difference in respondent induced missing data (Bemelmans-Spork, Kerssemakers, Sikkel
& Van Sintmaartensdijk, 1985; Martin, et al., 1994).
An early and much cited comparative study by Waterton (1984, see also Waterton &
Duffy, 1984) reports a positive effect of CAPI with a sensitive question about alcohol
consumption; using the CAPI method more alcohol consumption was reported, which means
that presumably CAPI was less affected by social desirability bias. However, in the CAPI mode
the sensitive question was asked by letting the respondent type their own answers into the
computer, unseen by the interviewers, which makes this part of the interview like a self-
administered questionnaire (CASI). In the traditional paper and pen mode, the question was
asked by the interviewer and the answer was taken down by the interviewer. Since self-
administered questionnaires typically show less social desirability bias than face-to-face
interviews (for an overview, see De Leeuw, 1992), the reported difference between PAPI and
CAPI in this study may well correspond to a difference between an interview and a self-
administered questionnaire, and not to a technology effect.
Studies that do compare paper and pen face-to-face interviewing with computer assisted
personal interviewing, and therefore focus more purely on the effect of the new technology,
report slightly less social desirability bias with CAPI (Baker & Bradburn, 1992; Bradburn et al.,
1992; Martin et al., 1994; Tourangeau & Smith, 1998). However, the differences are very small,
generally being smaller than differences typically found in comparisons of face to face versus
telephone interviews or experienced versus inexperienced interviewers (Olsen, 1992).
Tourangeau and Smith (1998) also found an interesting interaction with location of interview.
When the interview took place in the respondent’s home, the computer assisted version
produced more 'openness' in answers. However, when interviewed outside the home in a health
clinic, fewer open answers were given and the computer assisted version revealed fewer sex
partners than the paper and pen version. This suggests that setting is important. It is probably
more the way respondents perceive the total (computer assisted) interview situation, than the use
of the computer itself, that influences methodological data quality.
Computer Assisted Self Interviewing (CASI, CSAQ, Web)
There is strong evidence that for paper-and-pen modes, self-administered questionnaires are
better at eliciting sensitive information than interviews (for an overview, see De Leeuw, 1992;
De Leeuw & Collins, 1997). Computer-assisted self-interviewing has the additional advantage
that complex questionnaires with many routings (e.g., health inventories) can now be
administered in self-administered form. Whether a computer-assisted form also will produce
more open answers and more self-disclosure than a paper and pen questionnaire has been the
topic of a number of studies.
Several studies showed more self-disclosure on sensitive topics (e.g., abortion, male-
male sexual contact) when using CASI (cf. Turner, et al., 1998; Tourangeau & Smith, 1998).
There is some evidence that the use of Audio-CASI does not change this effect (Turner et al.,
1998; O'Reilly et al., 1994). In a meta-analysis of 39 studies, Weisband and Kiesler (1996)
found a strong significant effect in favor of computer methods. This effect was stronger for
comparisons between CASI and face-to-face interviews. But, even when CASI was compared
with self-administered paper-and-pen questionnaires, self-disclosure was significantly higher in
the computer condition. The effect reported was larger when more sensitive information was
asked. Weisband and Kiesler (1996) also report the interesting finding that the effect is
diminishing over the years, although it has not disappeared! They attribute the diminishing effect
to a growing familiarity with computers and their possibilities among the general public.
Interestingly, their meta-analysis also showed that the data were not homogeneous. This means
that although the general trend was in favor of computer assisted methods, some studies showed
the opposite effect.
Recent research suggests that these contradictory findings could be attributed to the
interview situation and perceived privacy. For instance, Van Hattum & De Leeuw (1999)
compared CASI and paper self-administered questionnaires in a study on bullying at primary
schools. They reported more openness and less social desirability in the CASI condition. In their
study, pupils worked at the class computer, in a special room or a quiet secluded corner of the
class. The paper questionnaires were administered in the classroom, where care was taken that
pupils could not see each other’s answers. Beebe et al (1998) investigated illicit drug use among
young high school students. They compared in-class self-administered paper tests, which were
placed in an envelope, with computer assisted questionnaires that were administered in the
computer lab. In this study, the paper self-administered test produced more openness in the
reporting of sensitive behavior. However, further analyses showed that the distance between
computer stations in the lab was crucial. For those students who were more than five feet away
from each other in the computer lab, the answers were very similar to the answers of the
students who used a paper questionnaire. These two studies underscore the extreme importance
of perceived confidentiality.
When using computer assisted questionnaires one should take careful precautions to gain
respondents’ trust. The setting and the implementation of the questionnaire should reassure the
respondent about confidentiality. Simple precautions, like masking the answer or refreshing the
screen when the answer has been given, will probably do the trick. Also, whenever other
persons are in the same room - be it interviewers, family members, teachers, or other students in
a lab - they should be kept at some distance.
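As a minimal sketch of such a precaution, assuming a simple console-based CASI program, the typed answer can be suppressed on the screen and the display cleared before the next question appears; the question wording below is an invented example.

```python
import getpass
import os

def ask_sensitive(prompt):
    """Ask a sensitive question without echoing the answer on the screen."""
    answer = getpass.getpass(prompt + " (your answer will not be shown): ")
    # Clear the screen so earlier questions and answers are not visible to bystanders.
    os.system("cls" if os.name == "nt" else "clear")
    return answer

if __name__ == "__main__":
    response = ask_sensitive("In the past month, how often did you drink alcohol?")
```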
Internet surveys are self-administered, so there is no interviewer present to assist the
respondent when difficulties arise. Internet surveys are by definition computer-assisted and
share all the advantages of computer-assisted surveys, which means that complex
questionnaires with controlled routing (skipping and branching) can be used. Empirical
research on the quality of web surveys is still scarce, although the number of studies is growing
rapidly. There is some indication that Internet surveys are more like mail than like telephone
surveys, with more extreme answers in telephone surveys than in Internet surveys (Dillman,
Phelps, Tortora, Swift, Kohrell, and Beck, 2001; Oosterveld and Willems, 2003). More
extremeness in telephone interviews was earlier found in comparisons with paper mail surveys
and is attributed to visual versus auditive information transmission (De Leeuw, 1992; Schwartz
et al 1991); the same mechanism may be responsible for differences between telephone and
Internet surveys. Comparisons between web and mail surveys give mixed results, some studies
find more partial response and more item nonresponse in web surveys (Lozar Manfreda,
Vehovar, and Batagelj, 2001; Bates, 2001), others report less item nonresponse in web surveys
than in mail surveys (McMahon, et al, 2003; Beullens, 2003). Regarding substantive responses
no clear picture emerges; Oosterveld and Willems (2003) found little to no differences between
CATI and Internet in a well-controlled experiment. Beullens (2003) reported some differences,
but when he controlled for differences in background characteristics due to self-selection, the
differences between mail and web are negligible. However, Link and Mockdad (2005) did find
differences in alcohol reporting and other health related estimates between web and mail
surveys, and Bäckström and Nilsen (2004) reported differences in substantive answers between
paper and web questionnaires in a student evaluation survey.
Trust is an important aspect in web surveys, as is illustrated by a study in Japan, which
emphasizes the importance of mutual trust for (non)response and data quality (Yoshimura &
Ohsumi, 2000). In an early study Kiesler and Sproull (1986) found fewer socially desirable
answers in an electronic questionnaire than in the paper mail version. Subsequent studies
(Mitchell, 1993) found no differences. Extrapolating the findings summarized above on both
CASI and internet surveys, I suggest that in e-mail and web surveys privacy and security could
be crucial factors when asking for sensitive information. Respondents should have the feeling
that their answers are safe, and encryption in combination with an icon to convey the message
(e.g., a key) should be standard. When designing special surveys, we should focus more on the
human-computer interaction and the perceptions and reactions of the respondent. In the end it is
the respondent, not the technology, that matters (cf. Blyth, 1998).
5. Summary
Computer assisted telephone interviewing and computer assisted face-to-face interviewing, are
by now widely used in survey research. Computer-assisted self-interviewing is gaining in
popularity, especially through web surveys and Internet panels.
Computer assisted data collection has a high potential to improve data quality. This,
together with the expectation that it would also improve efficiency and reduce costs, is why
computer assisted data collection became popular so quickly. However, for most of these
potential advantages the empirical evidence is still limited. Systematic comparisons of costs and
efficiency are rare, and the evidence for cost and time reduction is not very strong. A well-
designed computer assisted survey requires investing more time, money, and effort in the
beginning of the process (front-end processing), which is saved at the end stage (back-end
processing). These investments will only pay off in large scale or regularly repeated surveys.
There is little evidence that the use of CAPI, CATI and Disk-by-Mail surveys improves
response rates. Conversely, there is also no evidence for a decrease in response rates. How e-
mail and web surveys will develop remains uncertain. The novelty value is wearing off and
electronic junk mail is increasing. In addition, there may be financial costs (telephone costs and
connect time for most European countries). To ensure an acceptable response and good data
quality, one should carefully analyse what makes web surveys different (e.g., security, access to
the net of different demographic groups, connect costs, influence of screen lay-out on
measurement error and the robustness of lay-out for different web-browsers), and address these
issues in the design of the survey.
Internet surveys are fast and have low cost. Being self-administered there are no
interviewer effects, although the use of pictures and visual illustrations may influence
respondents' answers. The largest problems in Internet surveys are coverage and nonresponse.
Often, the sample in a Web survey is not a probability sample from a general population, and
there is no good method for generating random samples of email addresses. In addition,
measurement problems arise because questionnaires may look different in different browsers
and on different monitors, and respondents may have different levels of computer expertise.
There is ample empirical evidence of improved technological data quality in computer-
assisted data collection. A well-programmed and tested interview program will have range and
consistency checks, and prevent routing errors, which results in far less item nonresponse.
Computer assisted data collection is no panacea for good data quality. It requires one to do
almost everything that is needed with a good paper-and-pen interview or questionnaire, and to
add extra effort in computer implementation, in testing the questionnaire, in designing an
ergonomic screen lay-out, in extra interviewer training, and in designing a respondent friendly
and trustworthy questionnaire. However, this investment is earned back in far less interviewer
error and the error-free administration of complex questionnaires.
There is some evidence that computerized methods of data collection improve
methodological data quality. Respondents are less inhibited and show more self-disclosure
when sensitive questions are used. But this effect may be diminishing over time, as some studies
suggest. Also, there is evidence that much depends on the perception of the interview situation
by the respondent and on careful design of the total study and of the computer interface.
6. Discussion
Computer assisted data collection has a high potential to increase timeliness of results, improve
data quality, and reduce costs in large surveys. However, for most of these potential advantages
the empirical evidence is still limited. The majority of studies investigate the acceptability by
respondents and some aspects of data quality. A systematic comparison of costs is difficult, and
consequently these are rare. When the total costs of paper-and-pencil and computer assisted
survey research are compared, the evidence for cost reduction is not very strong. The
investments will only pay off in large scale or regularly repeated surveys.
At present, there is ample empirical evidence of improved technological data quality in
computer-assisted data collection. A well-programmed and tested interview program will have
range and consistency checks, and prevent routing errors, which results in far less item
nonresponse. However, computer assisted data collection is not being used to its full potential,
and the various aspects of data quality that have been studied are too limited (cf. Sikkel, 1998).
The strength of computer assisted data collection methods is the ability to increase the power of
interviewing and thus to answer more complex research questions. We should explore the
potential of the computer and use techniques for data collection that are impossible or
impractical with paper and pencil methods. For instance, randomization of question order and
randomization of the order of response categories can be implemented to avoid well-known
order effects. Also, with the aid of computer assisted interviewing very complex questions can
be asked and continuous response scales can be used in 'standard' interview situations (e.g.,
computerized diaries, vignettes, magnitude estimation). Measurement techniques that would be
almost impossible to use without a computer are natural grouping, adaptive conjoint analysis,
and tailored or controlled dependent interviewing (Sikkel, 1998; De Leeuw et al, 1998).
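As one small, hypothetical illustration of dependent (tailored) interviewing, the fragment below feeds an answer recorded in a previous panel wave into the wording of the current question; the record and wording are invented for this example and are not taken from any of the studies cited.

```python
# Hypothetical sketch of dependent (tailored) interviewing: the question wording
# incorporates information recorded in an earlier wave of a panel survey.

previous_wave = {"employer": "Acme Ltd", "hours_per_week": 32}  # made-up record

def dependent_question(record):
    """Build a question text that refers back to the respondent's earlier answers."""
    return (f"Last time you told us you worked for {record['employer']} "
            f"for about {record['hours_per_week']} hours a week. "
            "Is that still the case?")

print(dependent_question(previous_wave))
```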
There is little evidence that CATI or CAPI improve response rates. Conversely, there is
also no evidence for a decrease in response rates. In general, both interviewers and respondents
evaluate computer assisted interviewing positively, and CAI is accepted without problems.
However, interviewers should be well trained and experienced and much depends on the
interview situation. Computer assisted interviewing makes it possible to supervise interviewers
more closely and study interviewer behaviour by analyzing computer files. However,
comparative research has paid little attention to the effect of computerization on interviewer
variance and other aspects of interviewer behaviour (exceptions are Groves & Mathiowetz,
1984; Couper, et al., 1998).
There is some evidence that computerized methods of data collection improve
methodological data quality. Respondents are less inhibited and show more self-disclosure
when sensitive questions are used. But this effect may be diminishing over time, as some studies
suggest. Furthermore, there is evidence that much depends on the perception of the interview
situation by the respondent and on careful design of the total study and of the computer
interface. For instance, the distance between computers in a computer lab influences the
openness of answers; a larger distance gives more openness. Also, whether or not the typed-in
answers remain on the screen or are 'masked', and whether sounds come over a head phone or
through speakers in computer assisted self-interviews may affect answers on sensitive questions.
Systematic research on these topics will teach us more about how to use computers
optimally in data collection. In doing this, we should keep in mind that it is the human that
counts not the technology. How respondents perceive the interview situation, how large their
(mis)trust in computers is and how much they trust the survey organization, will determine the
success of computer-assisted methods and especially of web surveys.
Finally, I should emphasize that computer assisted data collection is no panacea for good
data quality. It requires one to do almost everything that is needed with a good paper-and-pen
interview or questionnaire, and to add extra effort in computer implementation, in testing the
questionnaire, in designing an ergonomic screen lay-out, in extra interviewer training, and in
designing a respondent friendly and trustworthy questionnaire. However, this investment is
earned back in far less interviewer error and the error-free administration of complex
questionnaires. It also offers us the opportunity to use really complicated questionnaires with
complex routing patterns, without the help of an interviewer (cf. Saris, 1998). If special efforts
are made during implementation and if the new possibilities computers offer us are really
used, we have the opportunity of obtaining not just better data, but clearly superior data with
computers. We should therefore use computer assisted data collection to its full potential and
invest in the development of new applications. In every survey the available tools do affect the
type of questions we can ask, and computer assisted data collection is offering us a large and
sophisticated methodological toolbox indeed. We should use this toolbox wisely!
References
Baker, R.P. (1990). What we know about CAPI: Its advantages and disadvantages. Paper
presented at the annual meeting of the American Association of Public Opinion
Research, Lancaster, Pennsylvania.
Baker, R.P. (1992). New technology in survey research: Computer assisted personal
interviewing (CAPI). Social Science Computer Review, 10, 145-157.
Baker, R.P. (1998). The CASIC future. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F.
Clark, J. Martin, W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey
information collection (pp. 583-604). New York: Wiley.
Baker, R.P. & Bradburn, N.M. (1992). CAPI: Impacts on data quality and survey costs.
Information Technology in Survey Research Discussion Paper 10 (also presented at the
1991 Public Health Conference on Records and Statistics).
Bates, N. (2001). Internet versus mail as a data collection methodology from a high coverage
population. Proceedings of the Annual Meeting of the American Statistical
Association. August 2001.
Beckenbach, A. (1992). Befragung mit dem Computer, Methode der Zukunft?.
Anwendungsmöglichkeiten, Perspektiven und experimentelle Untersuchungen zum
Einsatz des Computers bei Selbstbefragung und persönlich-mündlichen Interviews. [In
German: Computer Assisted Interviewing, A method of the future? An experimental
study of the use of a computer by self-administered questionnaires and face to face
interviews]. Ph.D. thesis. Universität Mannheim.
Beckenbach, A. (1995). Computer assisted questioning: The new survey methods in the
perception of the respondents. BMS, 48, 82-100.
Beebe, T.J., Harrison, P.A., McRae, J.A., Anderson, R.E., & Fulkerson, J.A. (1998). An
evaluation of computer-assisted self interviews in a school setting. Public Opinion
Quarterly, 62, 623-632.
Beullens, K. (2003). Evaluatie van een mixed-mode survey design [Evaluation of a mixed-
mode survey design]. Leuven: Katholieke Universiteit Leuven.
Bemelmans-Spork, M., Kerssemakers, F., Sikkel, D. & Van Sintmaartensdijk, H. (1985).
Verslag van het experiment 'het gebruik van draagbare computers bij persoons en
gezinsenquêtes'. Centraal Bureau voor de Statistiek. [In Dutch: Report of an experiment
on the use of portable computers in person and household surveys. Netherlands Central
Bureau of Statistics]
Blyth, B. (1998). Current and future technology utilization in European market research. In:
M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin, W.L. Nicholls II, &
J.M. O'Reilly (eds). Computer assisted survey information collection (pp.563-581). New
York: Wiley.
Blyth, B. (1998b). Market research and information technology; Application and innovation.
Esomar monograph 6 (introduction). Amsterdam: Esomar.
Bond, J (1991). Increasing the value of computer interviewing. Proceedings of the 1991
ESOMAR congress (Trends in data collection and analysis).
Bradburn, N.M., Frankel, M.R., Baker, R.P. & Pergamit, M.R. (1992). A comparison of CAPI
with PAPI in the NLS/Y. Chicago: NORC. Information Technology in Survey Research
Discussion Paper 9 (also presented at the 1991 AAPOR-conference, Phoenix, Arizona)
Catlin, G. & Ingram, S. (1988). The effects of CATI on costs and data quality: A comparison of
CATI and paper methods in centralized interviewing. In: R.M. Groves, P.P. Biemer,
L.E. Lyberg, J.T. Massey, W.L. Nicholls II & J. Waksberg (Eds.). Telephone survey
methodology (pp.437-456). New York: Wiley.
Clayton, R.L. & Werking, G.S. (1998). Business surveys of the future: The world wide web as a
data collection methodology. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J.
Martin, W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey information
collection (pp.543-562). New York: Wiley.
Collins, M., Sykes, W., & O'Muircheartaigh, C. (1998). Diffusion of technological innovation:
Computer assisted data collection in the UK. In: M.P. Couper, R.P. Baker, J. Bethlehem,
C.Z.F. Clark, J. Martin, W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted
survey information collection (pp.23-43). New York: Wiley.
Couper, M.P., Hansen, S.E., & Sadovsky, S.A. (1997). Evaluating interviewer use of CAPI
technology. In: L. Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, & D. Trewin
(eds). Survey Measurement and Process Quality (pp. 267-285). New York: Wiley.
Couper, M.P., & Nicholls, W.L.II (1998). The history and development of computer assisted
survey information collection methods. In: M.P. Couper, R.P. Baker, J. Bethlehem,
C.Z.F. Clark, J. Martin, W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted
survey information collection (pp.1-21). New York: Wiley.
Couper, M.P. (2000). Websurveys; the Good, the Bad, and the Ugly. University of Michigan,
Institute for Social Research, Survey Methodology Program, Working paper series #
077.
De Leeuw, E.D. (1992). Data quality in mail, telephone and face to face surveys. Amsterdam:
TT-Publikaties.
De Leeuw, E.D. (1997). Comment on K.J. Witt, Best practices in interviewing via the internet.
Sawtooth Software Conference Proceedings. Sequim, WA: Sawtooth Software Inc.
De Leeuw, E.D. (2005). To mix or not to mix data collection modes in surveys. Journal of
Official Statistics, 21, 5, 233-255 (also available on www.jos.nu).
De Leeuw, E.D. & Collins, M. (1997). Data collection method and data quality: An overview. In: L.
Lyberg, P. Biemer, M. Collins, C. Dippo, N. Schwarz, & D. Trewin (eds). Survey
Measurement and Process Quality (pp. 199-220). New York: Wiley.
De Leeuw, E.D., & Hox, J.J. (2008). Self-administered questionnaires. In: E.D. de Leeuw,
J.J. Hox, & D.A. Dillman (Eds), International handbook of survey methodology.
Psychology Press, Taylor and Francis.
De Leeuw, E., & Nicholls, W.L. II (1996). Technological innovations in data collection:
Acceptance, data quality and cost. Sociological Research Online, Vol 1, no 4,
<http://www.socresonline.org.uk/socresonline/1/4/leeuw.html>
De Leeuw, E., Hox, J., Kef, S., & Van Hattum, M. (1997). Overcoming the problems of special
interviews on sensitive topics: Computer assisted self-interviewing tailored for young
children and adolescents. Sawtooth Software Conference Proceedings (pp.1-14),
Sequim, WA: Sawtooth Software.
De Leeuw, E.D., Hox, J.J., & Snijkers, G. (1998). The effect of computer-assisted interviewing
on data quality. In: B. Blyth (ed). Market research and information technology;
Application and innovation. Esomar monograph 6 (pp.173-198). Amsterdam: Esomar.
De Leeuw, E.D., Nicholls, W.L. II, Andrews, S.H., & Mesenbourg, T.L. (2000). The use of old
and new data collection methods in establishment surveys. Proceedings of the 4th
International Conference on Methodological Issues in Official Statistics. Stockholm: SCB.
De Leeuw, E., Hox, J., & Kef, S. (2003). Computer-assisted self-interviewing tailored for
special populations and topics. Field Methods, 15, 223-251.
Deming, W.E. (1982). Quality, productivity and competitive position. Cambridge, MA:
Massachusetts Institute of Technology.
Dillman, D.A. (1978). Mail and telephone surveys; The Total Design Method. New York:
Wiley.
Dillman, D.A. (2000). Mail and internet surveys; The Tailored Design Method. New York:
Wiley.
Dillman, D.A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., and Berck, J. (2001). Response
rate and measurement differences in mixed mode surveys: using mail, telephone,
interactive voice response and the Internet. Draft paper at homepage of Don Dillman
(http://survey.sesrc.wsu.edu/dillman/) accessed 14 April 2005.
Edwards, B., Bittner, D., Edwards, W.S. & Sperry, S. (1993). CAPI effects on interviewers: A
report from two major surveys. Paper presented at the U.S. Bureau of the Census Annual
Research Conference, Washington D.C.
Fowler, F.J. Jr. (1991). Reducing interviewer-related error through interviewer training,
supervision, and other means. In: P.P. Biemer, R.M. Groves, L.E. Lyberg, N.A.
Mathiowetz & S. Sudman (Eds). Measurement errors in surveys (pp. 259-278). New
York: Wiley.
Groves, R.M. (1989). Survey errors and survey costs. New York: Wiley.
Groves, R.M. & Mathiowetz, N.A. (1984). Computer assisted telephone interviewing: Effects
on interviewers and respondents. Public Opinion Quarterly, 48, 356-369.
Groves, R.M. & Nicholls, W.L. II (1986). The status of computer-assisted telephone
interviewing: Part II-Data quality issues. Journal of Official Statistics, 2, 117-134.
Heberlein, T.A. & Baumgartner, R. (1978). Factors affecting response rates to mailed
questionnaires; A quantitative analysis of the published literature. American Sociological
Review, 43, 447-462.
Johnston, J. & Walton, C. (1995). Reducing response effects for sensitive questions: A computer
assisted self interview with audio. Social Science Computer Review, 13, 304-319.
Kiesler, S. & Sproull, L.S. (1986). Response effects in electronic surveys. Public Opinion
Quarterly, 50, 402-413.
Kennedy, J.M., Lengacher, J.E., & Demerath, L. (1990). Interviewer entry error in CATI
interviews. Paper presented at the International conference on measurement errors in
surveys, Tucson, Arizona, 1990.
Lozar Manfreda, K., Bosnjak, M., Berzelak, J., Haas, I., & Vehovar, V. (2007). Web surveys vs
other survey modes: A meta-analysis comparing response rates. International Journal of
Market Research.
Lozar Manfreda, K., & Vehovar, V. (2008). Internet Surveys. In E.D. de Leeuw, J.J. Hox, &
D.A. Dillman (Eds) International handbook of survey methodology. Psychology
Press, Taylor and Francis.
Lozar Manfreda, K., Vehovar, V., & Batagelj, Z. (2001). Web versus mail questionnaire for an
institutional survey. In: A. Westlake et al. (Eds), The Challenge of the Internet.
Association for Survey Computing.
Martin, J. & Manners, T. (1995). Computer assisted personal interviewing in survey research.
In: R.M. Lee (Ed). Information technology for the social scientists. London: UCL Press.
Martin, J., O'Muircheartaigh, C. & Curtice, J. (1994). The use of CAPI for attitude surveys: An
experimental comparison with traditional methods. Journal of Official Statistics, 9, 641-
661.
Mitchell, D.L. (1993). A multivariate analysis of the effects of gender and computer vs. PAPI
modes of administration on survey results. Louisiana Tech University.
McMahon, S.R., Iwamoto, M., Massoudi, M.S., Yusuf, H.R., Stevenson, J.M., David, F.,
Chu, S.Y., and Pickering, L.K. (2003). Comparison of e-mail, fax, and postal surveys
of pediatricians. Pediatrics, 111(4), e299-e303.
Nelson, R.O., Peyton, B.L. & Bortner, B.Z. (1972). Use of an online interactive system: Its
effects on speed, accuracy, and costs of survey results, paper presented at the 18th ARF
conference, New York, November 1972.
Nicholls, W.L. II, Baker, R.P., & Martin, J. (1997). The effect of new data collection
technologies on survey data quality. In: L. Lyberg, P. Biemer, M. Collins, C. Dippo, N.
Schwarz, & D. Trewin (eds). Survey Measurement and Process Quality (pp.221-248).
New York: Wiley.
Nicholls, W.L. II, & De Leeuw, E.D. (1996). Factors in acceptance of computer assisted
interviewing methods: A conceptual and historical review. Proceedings of the section of
survey research methods. American Statistical Association (pp. 758-763).
Nicholls, W.L. II (1996). Definition and assessment of survey data quality in measuring the
effects of new data collection technologies. WAPOR-conference, 1996; published in an
abbreviated form in: Bulletin of the International Statistical Institute, proceedings of the
51st session (Istanbul), tome LVII, Book 1, 507-510.
Olsen, R.J. (1992). The effects of computer assisted interviewing on data quality. Paper
presented at the fourth Social Science Methodology conference, Trento.
Oosterveld, P., & Willems, P. (2003). Two modalities, one answer? Combining Internet and
CATI surveys effectively in market research. In: D.S. Fellows (Ed), Technovate.
Amsterdam: ESOMAR.
O'Reilly, J.M., Hubbard, M., Lessler, J., Biemer, P.P., & Turner, C.F. (1994). Audio Computer
Assisted Self-Interviewing: New Technology for data collection on sensitive issues and
special populations. Journal of Official Statistics, 10, 197-214.
Ramos, M., Sedivi, B.M., & Sweet, E.M. (1998). Computerized self-administered
questionnaires. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin,
W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey information collection
(pp. 389-408). New York: Wiley.
Riede, T. & Dorn, V. (1991). Zur Einsetzbarkeit von Laptops in haushaltsbefragungen in der
Bundesrepublik Deutschland [In German: Acceptance of laptops for household surveys
in Germany]. Wiesbaden: Statistisches Bundesamt. Heft 20 der Schriftenreihe
Ausgewählte Arbeitsunterlagen zur Bundesstatistik.
Samuels, J. (1994). From CAPI to HAPPI: A scenario for the future and its implications for
research. Proceedings of the 1994 ESOMAR congress (Applications of new
technologies).
Saltzman, A. (1992). Improving response rates in Disk-by-Mail Surveys, Sawtooth Software
Conference Proceedings. Evanston: Sawtooth Software.
Saris, W.E. (1998). Ten years of interviewing without interviewers: The telepanel. In: M.P.
Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin, W.L. Nicholls II, & J.M.
O'Reilly (eds). Computer assisted survey information collection (pp. 409-429). New
York: Wiley.
Schaefer, D.R., & Dillman, D.A. (1998). Development of a standard e-mail methodology;
Results of an experiment. Public Opinion Quarterly, 62, 378-397.
Sebestik, J., Zelon, H., DeWitt, D., O'Reilly, J.M. & McCowan, K. (1988). Initial experiences
with CAPI. Paper presented at the U.S. Bureau of the Census Annual Research
Conference, Washington, D.C.
Sikkel, D. (1998). The individual interview. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F.
Clark, J. Martin, W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey
information collection (pp.147-165). New York: Wiley.
Sikkel, D., Hox, J., & de Leeuw, E. (2008). Using auxiliary data for adjustment in longitudinal
research. In: P. Lynn (Ed), Methodology of longitudinal surveys. New York: Wiley. An
earlier version is available at
http://www.iser.essex.ac.uk/ulsc/mols2006/programme/data/papers/Sikkel.pdf
Sperry, S., Bittner, D. & Branden, L. (1991). Computer assisted personal interviewing on the
current beneficiary survey. Paper presented at the AAPOR 1991 conference, Phoenix,
Arizona.
Statistics Sweden (1989). Computer assisted data collection in the labour force surveys: Report
of technical tests. Stockholm: Statistics Sweden.
Schwarz, N., Strack, F., Hippler, H.J., and Bishop, G. (1991). The impact of administrative
mode on response effects in survey measurement. Applied Cognitive Psychology, 5,
193-212.
Thornberry, O., Rowe, B. & Biggar, R. (1991). Use of CAPI with the U.S. National Health
Interview Survey. Bulletin de Méthodologie Sociologique, 30, 27-43.
Tourangeau, R., & Smith, T.W. (1998). Collecting sensitive information with different modes of
data collection. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin, W.L.
Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey information collection (pp.
431-453). New York: Wiley.
Turner, C.F., Forsyth, B.H., O'Reilly, J.M., Cooley, P.C., Smith, T.K., Rogers, S.M., & Miller,
H.G. (1998). Automated self-interviewing and the survey measurement of sensitive
behaviors. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin, W.L.
Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey information collection
(pp.1-21). New York: Wiley.
Van Bastelaer, A.M.L., Kerssemakers, F.A.M. & Sikkel, D. (1988). A test of the Netherlands
continuous labour force survey with hand-held computers: Contributions to
questionnaire design. Journal of Official Statistics, 4, 141-154.
Van Bastelaer, A.M.L., Kerssemakers, F.A.M. & Sikkel, D. (1987). A test of the Netherlands
continuous labour force survey with hand-held computers: Interviewer behaviour and
data quality. In: CBS-Select 4; Automation in survey processing. Den Haag:
Staatsuitgeverij.
Van Hattum, M., & De Leeuw, E.D. (1999). A Disk-by-Mail survey of teachers and pupils in
Dutch primary schools; Logistics and data quality. Journal of Official Statistics.
Waterton, J.J. (1984). Reporting alcohol consumption: The problem of response validity.
Proceedings of the section on survey research methods of the American Statistical
Association (pp. 664-669). Washington D.C: ASA.
Waterton, J.J. & Duffy, J.C. (1984). A comparison of computer interviewing techniques and
traditional methods in the collection of self-report alcohol consumption data in a field
survey. International Statistical Review, 2, 173-182.
Weeks, M.F. (1992). Computer-Assisted Survey Information Collection: A review of CASIC
methods and their implication for survey operations. Journal of Official Statistics, 4,
445-466.
Weisband, S. & Kiesler, S. (1996). Self-disclosure on computer forms: Meta-analysis and
implications. Tucson: University of Arizona (Available on internet:
http://www.al.arizona.edu/~weisband/chi/chi96.html).
Witt, K.J., & Bernstein, S. (1992). Best practices in Disk-By-Mail Surveys, Sawtooth Software
Conference Proceedings. Evanston: Sawtooth Software.
Witt, K.J. (1997). Best practices in interviewing via the internet. Sawtooth Software Conference
Proceedings. Sequim, WA: Sawtooth Software Inc.
Woijcik, M.S., & Baker, R.P. (1992). Interviewer and respondent acceptance of CAPI.
Proceedings of the Annual Research Conference, Washington DC: US. Bureau of the
Census, 619-621.
Woijcik, M.S., & Hunt, E. (1998). Training field interviewers to use computers: Past, present,
and future trends. In: M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin,
W.L. Nicholls II, & J.M. O'Reilly (eds). Computer assisted survey information collection
(pp. 331-349). New York: Wiley.
Yoshimura, O., & Ohsumi, N. (2000). Some experimental surveys on the WWW Environments.
Presented at the session Internet surveys and retrieval, International Meeting of the
International Federation of Classification Societies (IFCS), Namur, Belgium, 2000.
Appendix A
Taxonomy of Computer Assisted Data Collection Methods
This appendix presents a systematic overview of survey methods and their computer assisted
equivalents. General names: CADAC (Computer Assisted Data Collection), CASIC
(Computer Assisted Survey Information Collection), and CAI (Computer Assisted Interviewing).
Data Collection Method and Computer Assisted Form

Interview
  Face-to-face interview: CAPI (Computer Assisted Personal Interviewing)
  Telephone interview: CATI (Computer Assisted Telephone Interviewing)

Self-administered form, with interviewer present
  CASI (Computer Assisted Self Interviewing)
    V-CASI (question text on screen only: visual)
    A-CASI (question text on screen and also heard on audio)

Self-administered form, without interviewer
  CSAQ (Computer Assisted Self-Administered Questionnaire)
    Mail survey equivalents: DBM (Disk by Mail), EMS (Electronic Mail Survey), web survey, Internet survey
    Telephone survey equivalents: TDE (Touchtone Data Entry), IVR (Interactive Voice Response),
      ASR (Automatic Speech Recognition), T-ACASI (Telephone Audio-CASI; self-administered by telephone)

Panel research
  CAPAR (Computer Assisted Panel Research), Internet panel, web panel, access panel
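Purely as a hypothetical illustration of how this taxonomy might be used in practice, for instance to label survey records by mode of data collection, the sketch below encodes the table as a simple lookup structure; the dictionary name and the key spellings are my own and are not part of the taxonomy itself.

# Hypothetical encoding of the taxonomy above as a lookup table;
# the names and keys are illustrative, not part of the original taxonomy.
CAI_EQUIVALENTS = {
    "face-to-face interview": ["CAPI"],
    "telephone interview": ["CATI"],
    "self-administered, interviewer present": ["CASI", "V-CASI", "A-CASI"],
    "self-administered, mail equivalent": ["CSAQ", "DBM", "EMS", "web survey"],
    "self-administered, telephone equivalent": ["TDE", "IVR", "ASR", "T-ACASI"],
    "panel research": ["CAPAR", "Internet panel", "web panel", "access panel"],
}

def cai_forms(method):
    # Return the computer assisted forms listed for a traditional method,
    # or an empty list if the method is not in the taxonomy.
    return CAI_EQUIVALENTS.get(method.strip().lower(), [])

print(cai_forms("Telephone interview"))  # ['CATI']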