Public Opinion Quarterly, Vol. 70, No. 1, Spring 2006, pp. 66–77
doi:10.1093/poq/nfj007
© The Author 2006. Published by Oxford University Press on behalf of the American Association for Public Opinion Research.
All rights reserved. For permissions, please e-mail: journals.permissions@oxfordjournals.org.
COMPARING CHECK-ALL AND FORCED-CHOICE
QUESTION FORMATS IN WEB SURVEYS
JOLENE D. SMYTH
DON A. DILLMAN
LEAH MELANI CHRISTIAN
MICHAEL J. STERN
Washington State University
Abstract For survey researchers, it is common practice to use the
check-all question format in Web and mail surveys but to convert to the
forced-choice question format in telephone surveys. The assumption
underlying this practice is that respondents will answer the two formats
similarly. In this research note we report results from 16 experimental
comparisons in two Web surveys and a paper survey conducted in 2002
and 2003 that test whether the check-all and forced-choice formats pro-
duce similar results. In all 16 comparisons, we find that the two question
formats do not perform similarly; respondents endorse more options and
take longer to answer in the forced-choice format than in the check-all
format. These findings suggest that the forced-choice question format
encourages deeper processing of response options and, as such, is pref-
erable to the check-all format, which may encourage a weak satisficing
response strategy. Additional analyses show that neither acquiescence
bias nor item nonresponse seems to pose substantial problems for use of
the forced-choice question format in Web surveys.
This research note is a revised version of a paper presented at the 2005 American Association for
Public Opinion Research meeting, May 12–15, Miami Beach, FL. Analysis of these data was sup-
ported by funds provided to the Washington State University Social and Economic Sciences
Research Center under Cooperative Agreement #43-3AEU-1-80055 with the U.S. Department of
Agriculture National Agricultural Statistics Service, supported by the National Science Foundation,
Division of Science Resource Statistics. Data collection was financed by the Social and Economic
Sciences Research Center and the Gallup Organization. A more detailed analysis of the data analyzed
here, including exact question formats and tabular comparisons referenced in this article, is available
from a report to the National Science Foundation by the same authors: “Comparing Check-All and
Forced-Choice Question Formats in Web Surveys: The Role of Satisficing, Depth of Processing, and
Acquiescence in Explaining Differences,” Social and Economic Sciences Research Center Technical
Report 05-029, available online at http://survey.sesrc.wsu.edu/dillman/papers.htm (accessed
December 31, 2005). Address correspondence to Jolene D. Smyth; e-mail: jsmyth@wsu.edu.
A common question format in Web surveys is the “check all that apply”
question, for which respondents are asked to mark all that apply from among a
list of options. The check-all-that-apply question format is especially compat-
ible with Web surveys because of the availability of the HTML-box Web
design feature. Rather than limiting respondents to only one answer, this fea-
ture allows multiple items to be selected, making the design of check-all ques-
tions quite efficient. In telephone surveys, however, the check-all format is
considered awkward and is seldom used. Instead, a forced-choice question
format, where respondents provide an answer (e.g., yes/no) for each item in
the list, is typically employed. While the forced-choice format is more effi-
cient for telephone surveys, self-administered survey designers have avoided
its use partially due to the concern that respondents will treat forced-choice
questions as check-all items by marking answers only in the “yes” category
and ignoring the “no” category. The ability to require responses to each item
on Web surveys could override that concern; however, error messages requir-
ing that each item be answered may irritate respondents and cause them to
terminate their participation in the survey (Best and Krueger 2004).
As a result of these tensions, it has become common practice to convert
between the check-all and forced-choice formats when switching between
self-administered and interview surveys and to assume that these question formats
are functional equivalents (Rasinski, Mingay, and Bradburn 1994). Sudman
and Bradburn (1982), however, argue that the response task in the two ques-
tion formats is fundamentally different. The check-all format presents the
options as a set of items from which the respondent should choose those that
apply. Conversely, the forced-choice format asks respondents to provide an
answer (yes or no) for each response option, a task that should encourage
respondents to consider and come to a judgment about each item individually.
This difference in response task may lead respondents to use different stra-
tegies for answering when presented with check-all and forced-choice question
formats. In the check-all format, for example, the task of considering the set of
items and selecting those that apply may encourage respondents to avoid
expending the time and effort required to answer the question optimally by
choosing only the first response option(s) they can reasonably justify, a form
of weak satisficing (Krosnick 1991, 1999; Krosnick and Alwin 1987). Using
this strategy, respondents can quickly satisfy the requirements of the question
and then proceed to the next question without giving adequate attention to the
remaining response options. Such a response strategy should manifest itself in
relatively fast response times, as well as in patterns of primacy where options
are more likely to be selected when they appear near the top of the list than
when they appear near the bottom of the list (Krosnick 1999).
In contrast, to satisfy the requirements of a forced-choice question with its
explicit yes/no categories, respondents have to commit to an answer for every
item. Because it encourages respondents to elaborate on and more deeply
process every option, this question format should discourage a satisficing
response strategy and take longer to answer (Sudman and Bradburn 1982). In
addition, it should result in more options being endorsed, both because
respondents process throughout the list and because they more deeply process
each individual response option, making them more likely to think of reasons
the options apply (Krosnick 1992; Sudman, Bradburn, and Schwarz 1996).
In addition to different response tasks, Sudman and Bradburn (1982) point
out that the interpretation of the responses themselves also differs across these
two question formats. For example, respondents to check-all questions may
leave an option blank for a number of reasons: (1) the option does not apply to
them, (2) they are neutral or undecided about it, or (3) they overlooked it.
Consequently, in a check-all question we cannot conclude that a blank option
is equivalent to “does not apply.” In contrast, the addition of the explicit “no”
category in the forced-choice format allows for finer differentiation of the
meaning of responses; options left blank can clearly be interpreted as missing.
However, the explicit “no” category in forced-choice questions may have
unintended consequences if respondents who are actually neutral or otherwise
undecided on a particular option are more likely to agree than disagree, a form
of agreeing response bias or acquiescence (Schuman and Presser 1981). Such
a tendency to agree may result in respondents marking “yes” in order to avoid
being disagreeable (by marking “no”), which would result in the forced-
choice format artificially yielding more options marked affirmatively.
The comparability of responses from check-all and forced-choice questions
has been addressed in only one published experiment of which we are aware.
Rasinski, Mingay, and Bradburn (1994) compared these question formats in a
mail survey field test for round three of the 1988 National Educational Longi-
tudinal Study. Half of the respondents were assigned a version of the survey in
which three questions were formatted as check-all-that-apply questions and
the other half were assigned a version in which these same three questions
were formatted as forced-choice questions with yes/no categories. For all
three items the mean number of options marked per respondent in the forced-
choice version was significantly higher than the mean number marked in the
check-all version (3.03 vs. 2.86, p = .002; 2.47 vs. 1.53, p = .001; and 1.18 vs.
0.96, p = .001).
Using the results of experiments from two Web surveys and a paper survey
comparison, our purpose in this research is to extend the work of Rasinski,
Mingay, and Bradburn (1994) in two important ways. First, we extend their
work to Web surveys by examining check-all and forced-choice question for-
mats in the Web mode. Second, whereas the earlier study limited its analyses
to behavioral and factual questions, we include both behavioral/factual (e.g.,
resources used at Washington State University, student group participation,
and food vendors used on campus) and opinion-based questions (e.g., descrip-
tions of the Washington State University Pullman campus, admittance criteria,
and university budget adjustments) to ascertain whether the effects of switching
between formats are related to the type of question being asked. In addition to
these extensions, we briefly report findings related to depth of processing and
satisficing, acquiescence, and item nonresponse in check-all and forced-
choice questions.
Procedures
We compare check-all and forced-choice question formats using up to four
experimental variations of substantively different questions from two Web
surveys and one paper survey, all designed to assess the undergraduate experi-
ence at Washington State University (WSU). Design and implementation
details for all three surveys are summarized in table 1. In all of the surveys
students were randomly assigned to a version of the questionnaire, and all
respondents received a two-dollar incentive with the survey request. Each
Web respondent also received a unique identification number that he or she
was required to use to access the survey. The Web surveys were designed simi-
larly: questions appeared on their own page in black text against a colored
background; answer spaces appeared in white so as to provide contrast
between the answer spaces and the background; screens were constructed with
HTML tables using proportional widths to maintain the visual aspect of the
screen regardless of individual users’ window sizes; and font size and style
were automatically adjusted using Cascading Style Sheets to accommodate
various users’ screen resolutions. In the paper survey questions also appeared
in black text against a colored background with white answer spaces. Replicas
of the questions as formatted in the studies are available in an online appendix
to this article.
Findings
Results in the first three columns of table 2 unequivocally support the expectation that the forced-choice format yields more options marked affirmatively than the check-all format.
Table 1. Design and Implementation Details for Surveys

Survey   Date          Experimental   Number of    Sample   Completed    Response
                       Versions       Questions    Size     Responses    Rate
Paper    Spring 2002   4              41           1,800    1,042        58%
Web      Spring 2003   4              21           3,004    1,591        53%
Web      Fall 2003     4              25           3,045    1,705        56%

NOTE.—The response rate reported for the three studies is American Association for Public Opinion Research (AAPOR) response rate 2 (AAPOR 2004).
Table 2. Comparisons Between the Check-All and Forced-Choice Formats for Mean Number of Options Marked Affirmatively and Mean Time (in Seconds) Spent Answering Questions

                                          Mean Number Marked Affirmatively       Mean Time Spent Answering (s)
                                          Check-All  Forced-Choice  1-Sided t    Check-All  Forced-Choice  1-Sided t

Web Experiment #1: Spring 2003
Q11: Resources used at WSU (10)
  Check vs. Used/Not Used                 5.4        5.7            3.41*        15.9       25.0           13.42*
  Check (R) vs. Used/Not Used (R)         5.6        6.2            5.43*        19.2       27.9           14.45*
Q13: Cougar varsity sports fan (15)
  Check vs. Yes/No                        2.6        3.6            4.43*        14.1       30.5           -13.92*
  Check vs. Fan/Not a Fan                 2.6        3.9            5.87*        14.1       28.0           17.73*
Q16: Student group participation (11)
  Check vs. Yes/No                        1.9        2.6            5.01*        16.5       27.1           13.03*
  Check vs. Participate/Not Participate   1.9        2.4            3.67*        16.5       27.0           13.16*
Overall Mean for Survey #1                3.3        4.1            4.96*        16.1       27.6           9.40*

Web Experiment #2: Fall 2003
Q3: Descriptions of campus (12)
  Check vs. Yes/No                        4.4        6.6            17.68*       11.6       35.5           22.26*
Q6: Admittance criteria (14)
  Check vs. Yes/No                        5.0        6.1            6.27*        13.0       42.4           20.35*
  Check (R) vs. Yes/No (R)                5.2        5.9            4.46*        13.2       44.2           20.64*
Q11: Univ. budget adjustments (14)
  Check vs. Yes/No                        3.5        4.6            7.24*        14.4       54.5           24.10*
Q14: Cougar varsity sports fan (15)
  Check vs. Yes/No                        3.1        4.5            6.19*        5.5        27.1           28.02*
Q16: Food vendors on campus (9)
  Check vs. Yes/No                        4.4        5.0            3.56*        5.2        12.5           19.57*
  Check (R) vs. Yes/No (R)                4.6        5.1            3.79*        5.6        12.6           22.30*
Q20: Possessions in Pullman (13)
  Check vs. Yes/No                        6.4        6.7            1.94*        7.8        20.2           19.79*
  Check (R) vs. Yes/No (R)                6.6        6.9            1.61         8.0        21.7           22.79*
Overall Mean for Survey #2                4.8        5.7            4.44*        9.4        30.1           5.41*

Paper Experiment: Spring 2002
Q5: Cougar varsity sports fan (15)
  Check vs. Yes/No                        2.6        3.8            5.94*        N/A        N/A            N/A

Overall Mean (All Surveys)                4.1        5.0            18.57*       N/A        N/A            N/A

NOTE.—The number of response options offered for each question is displayed in parentheses. "(R)" denotes treatments in which the options were presented in reverse order (inverted). Time outliers were removed at two standard deviations above the mean.
* p ≤ .05.
Overall, in the check-all formatted versions, an average of 4.1 options were marked per question. In the forced-choice versions, the
average number of options marked per question was significantly higher at 5.0
(t = –18.57, p = .000). Fifteen of the sixteen comparisons were significantly
different in the expected direction, and the sixteenth approached significance
(p = .054). Moreover, 91 percent of response options were marked affirma-
tively more often when they appeared in the forced-choice format than when
they appeared in the check-all format.
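To make this kind of comparison concrete, the sketch below shows, in Python, a one-sided two-sample t-test of the sort summarized in table 2. The per-respondent counts are hypothetical stand-ins; the actual WSU data are not reproduced here, and the original analysis may have used different test settings.

```python
# Minimal sketch (hypothetical data): one-sided two-sample t-test comparing the
# mean number of options endorsed under the check-all and forced-choice formats.
import numpy as np
from scipy import stats

check_all = np.array([3, 5, 4, 2, 6, 4, 3, 5])      # options checked per respondent
forced_choice = np.array([5, 6, 4, 6, 7, 5, 6, 5])  # options marked "yes" per respondent

# SciPy reports a two-sided p-value; halve it for the one-sided hypothesis
# that the forced-choice mean is higher (valid when t has the expected sign).
t, p_two_sided = stats.ttest_ind(forced_choice, check_all, equal_var=False)
p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2

print(f"check-all mean = {check_all.mean():.2f}, "
      f"forced-choice mean = {forced_choice.mean():.2f}, "
      f"t = {t:.2f}, one-sided p = {p_one_sided:.3f}")
```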
Not only did conducting the surveys via the Web allow us to extend Rasinski
and colleagues’ (1994) findings to a new mode, it also allowed us to collect
paradata (Heerwegh and Loosveldt 2004) to examine how much time respond-
ents spent on each question format.1 As a result, we can begin to assess some
explanations for the finding of more options being marked affirmatively in the
forced-choice format. The last three columns in table 2 indicate that in all
instances respondents to the forced-choice formatted questions spent signifi-
cantly more time responding than did respondents to the check-all formatted
questions. At minimum, respondents spent 45 percent longer on the forced-
choice format, and on average they spent two and a half times longer. Some of
this additional time was undoubtedly spent marking the “no” category in the
forced-choice questions, a step that is not required on the check-all format;
however, the magnitude of the time differences between formats suggests that
respondents spent more time on the forced-choice format independent of this
extra mechanical response step. These findings support the claim of Sudman
and Bradburn (1982) that items are subject to deeper processing in the forced-
choice format than the check-all format, and they suggest that respondents to
the check-all formatted questions may be employing a satisficing response
strategy.
Support for this claim is bolstered by two additional findings. First, as
shown in figure 1, respondents who spent over the mean response time on
check-all questions marked significantly more answers on average than those
who spent the mean response time or less (5.6 vs. 3.7). In fact, these respondents marked as many options as, and often more than, all respondents to the forced-choice questions (overall means: 5.6 vs. 5.0, respectively), suggesting that those spending more time on check-all questions were processing the response options more deeply and thus finding a greater number of response options that applied to them. In contrast, figure 2 shows that, for most questions (15 of 19), forced-choice respondents taking more than the mean response time did not mark significantly more options than their counterparts who used the mean response time or less (5.2 vs. 5.0). These findings suggest that
the additional time spent on the forced-choice format that we see in table 2 is
sufficient for respondents to more deeply process all of the response options, such that spending even more time does not lead to more options being marked.

1. The paradata were collected slightly differently in the two Web surveys. Specifically, in the first Web survey the time is measured from when the page loaded to when the respondent clicked their last response. Response time in the second survey is measured from when the page loaded to when the respondent clicked the "submit" button. Comparisons within surveys should not be affected by this programming difference.
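As an illustration of how timing paradata of this sort can be reduced to the per-question response times reported in table 2, the Python sketch below computes elapsed time from a page-load and a last-action timestamp and trims outliers at two standard deviations above the mean, as in the note to table 2. The record layout is an assumption, not the surveys' actual logging format.

```python
# Sketch (hypothetical record layout): convert client-side paradata, here
# (page_load_ms, last_action_ms) pairs for one question, into elapsed seconds,
# then drop outliers above the mean plus two standard deviations.
import numpy as np

paradata = [
    (0, 12000), (0, 15000), (0, 14000), (0, 16000),
    (0, 13000), (0, 15500), (0, 14500), (0, 240000),  # last record is an outlier
]

def trimmed_response_times(records):
    """Elapsed seconds per respondent, with outliers above mean + 2 SD removed."""
    secs = np.array([(end - start) / 1000.0 for start, end in records])
    cutoff = secs.mean() + 2 * secs.std()
    return secs[secs <= cutoff]

times = trimmed_response_times(paradata)
print(f"mean response time: {times.mean():.1f} seconds over {len(times)} respondents")
```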
Figure 1. Mean number of options marked by those taking above and below the mean response time in the check-all format. [Bar chart comparing the two groups across 14 check-all treatments from Web #1 and Web #2; asterisks mark differences significant at p ≤ .05.]
Figure 2. Mean number of options marked by those taking above and below the mean response time in the forced-choice format. [Bar chart comparing the two groups across 19 forced-choice treatments from Web #1 and Web #2; asterisks mark differences significant at p ≤ .05.]
Second, for the check-all respondents who spent the mean response time or
less, eight of ten questions presented in original and reverse order showed that
options were significantly more likely to be endorsed when they appeared in
the first three positions in the list than when they appeared in the last three
positions (analysis not shown).2 These patterns of primacy suggest that
respondents who spend less than the mean amount of time responding to the
check-all format may be employing a weak satisficing strategy. In contrast,
only one such comparison resulted in significant primacy patterns for check-
all respondents who spent over the mean response time.
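A sketch of how such a primacy comparison can be computed follows; the 0/1 response matrices and the fast/slow split below are simulated for illustration and are not the survey data.

```python
# Sketch (simulated data): compare the endorsement rate of the first three
# list positions with that of the last three, separately for check-all
# respondents at or below vs. above the mean response time.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_options = 200, 10

# Simulate a primacy pattern for fast respondents (endorsement probability
# declines with list position) and a flat pattern for slow respondents.
p_fast = np.linspace(0.6, 0.2, n_options)
fast = rng.binomial(1, p_fast, size=(n_respondents, n_options))
slow = rng.binomial(1, 0.4, size=(n_respondents, n_options))

def first_vs_last_three(matrix):
    """Mean endorsement rate of the first three vs. the last three positions."""
    return matrix[:, :3].mean(), matrix[:, -3:].mean()

for label, matrix in [("mean time and below", fast), ("above mean time", slow)]:
    first, last = first_vs_last_three(matrix)
    print(f"{label}: first three = {first:.2f}, last three = {last:.2f}")
```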
In additional analyses (not shown) we tested for acquiescence in the forced-
choice format by including a third category, “don’t know” or “neutral,” with
the yes/no categories for two questions (descriptions of WSU Pullman campus
and Cougar varsity sports fan).3 If neutral or undecided respondents are acqui-
escing by choosing “yes” to avoid being disagreeable, we would expect to see
the third category drawing responses from the “yes” category when we com-
pare the yes/no/don’t know format to the original yes/no format. The addition
of the third category did not, however, draw responses from the “yes” cat-
egory for either question. In fact, for the Cougar varsity sports fan question,
the “neutral” category drew responses predominantly from the “no” category.
These findings indicate that “neutral” or “don’t know” respondents did not
choose the “yes” category in an effort to avoid rejecting items.
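The logic of this check can be expressed compactly: if neutral respondents were acquiescing, the added middle category should pull its responses mainly from "yes." The Python sketch below compares category shares across the two versions; the counts are hypothetical illustrations, not the survey results.

```python
# Sketch (hypothetical counts): see where a "don't know" category draws its
# responses from by contrasting category shares in the yes/no version with
# those in the yes/no/don't-know version of the same question.
def shares(counts):
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

two_category = {"yes": 430, "no": 570}                      # yes/no version
three_category = {"yes": 425, "no": 440, "dont_know": 135}  # yes/no/don't know version

s2, s3 = shares(two_category), shares(three_category)
print(f"'yes' share: {s2['yes']:.2f} (yes/no) vs. {s3['yes']:.2f} (with don't know)")
print(f"'no' share:  {s2['no']:.2f} (yes/no) vs. {s3['no']:.2f} (with don't know)")
```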
Finally, very few respondents treated forced-choice formatted questions as
check-all questions by ignoring the “no” category and marking only within the
“yes” category. Across all 24 forced-choice treatments included in the three
surveys, the mean percentage of respondents who treated forced-choice ques-
tions as check-all questions was only 2.7. However, because two of the ques-
tions did have high percentages (up to 11.3 percent) of respondents using this
response strategy, we investigated what made these particular questions more
likely to produce check-all response patterns. We hypothesized that forced-
choice questions based on opinions discourage the treatment of forced-choice
questions as check-all questions because respondents are unlikely to have pre-
formed judgments readily available to answer them and, therefore, will need
extra time to form a judgment (Sudman, Bradburn, and Schwarz 1996). Thus,
opinion-based questions require more consideration, which will slow the
respondent down. In contrast, respondents are more likely to have information
readily available to answer behavior and fact-based questions. As a result,
these questions may facilitate “quick clicking,” resulting in a higher likeli-
hood of respondents ignoring the “no” category.
2. Tables for all “not shown” analyses are available in Smyth et al. (2005).
3. Respondents who marked all of the options “yes” were excluded from these percentages as we
assume they sincerely meant “yes” on all options and were not treating the question as a check-all.
An additional question, Q24, is included in this analysis that is not included in previous analyses.
There is no check-all treatment for this question, which precludes its inclusion in previous analy-
ses, but that limitation is not relevant for the current analysis.
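For concreteness, the flagging rule described above, together with the exclusion in note 3, can be written as a small function. The 'yes'/'no'/None coding for stored responses is an assumption made for this sketch.

```python
# Sketch (assumed response coding): flag respondents who answered a
# forced-choice question as if it were a check-all item, i.e., marked "yes"
# for some options, never used "no," and left the rest blank. Respondents
# who marked every option "yes" are excluded, following note 3.
def treats_as_check_all(responses):
    """responses: one entry per option, either 'yes', 'no', or None (blank)."""
    if "no" in responses:
        return False                      # used the "no" category as intended
    if all(r == "yes" for r in responses):
        return False                      # all "yes": assumed sincere, excluded
    return "yes" in responses             # some "yes," rest blank: check-all behavior

examples = [
    ["yes", None, "yes", None, None],     # flagged
    ["yes", "no", "no", "yes", "no"],     # not flagged
    ["yes", "yes", "yes", "yes", "yes"],  # not flagged (note 3 exclusion)
]
for responses in examples:
    print(responses, "->", treats_as_check_all(responses))
```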
The mean percentage of respondents who treated the forced-choice ques-
tions as check-all questions is 3.47 for the behavior/fact-based questions and
only 1.58 for the opinion-based questions (one-sided t = –1.55, p = .067),
which suggests some support for this explanation. In addition, the behavior/
fact-based questions took, on average, 23.15 seconds to complete, while the
opinion-based questions took 42.16 seconds to complete (one-sided t = 4.79,
p = .002). Together, these findings suggest that compared with the behavior/
fact-based questions, respondents gave more consideration to (or at least took
longer to process) the opinion-based questions, which may have discouraged
their treatment of them as check-all questions. Two points should be noted for
this analysis. First, the wording of the questions included the positive and nega-
tive categories as part of the question stem (e.g., “Do you think that each
description does or does not describe this campus?”) to avoid prose that would
encourage respondents to mark only “yes” answers (e.g., “Please check which
of these sports you are a fan of”). We cannot speak to the effect of the forced-
choice format on item nonresponse for questions that do not use this approach,
but we think that including the positive and negative categories in the question
stem is a generally advisable technique. Second, the time data should be inter-
preted with caution because there are substantial differences in the length of
the question stems and the number and length of response options across these
two types of categories that may have increased reading and comprehension
time.
Discussion and Conclusions
Consistent with experimental results from a mail self-administered survey reported by Rasinski, Mingay, and Bradburn (1994), our tests of ten items in two Web surveys and a paper comparison uniformly support the hypothesis that the forced-choice format results in more options being selected. Our
results included item-order reversals, items with varying numbers of response
options (ranging from 9 to 15), replication of one item across all three surveys,
and opinion as well as behavioral items. Together with previous findings,
these data strongly suggest that when self-administered surveys present
respondents with the forced-choice format instead of the check-all format,
respondents will select a greater number of options, regardless of question
type.
Additional analyses suggested that the forced-choice format, as proposed
by Sudman and Bradburn (1982), does lead respondents to more deeply process
the response options, whereas a large portion of respondents to the check-all
formatted questions appear to be spending less time and may not be process-
ing all of the response options. Overall, the forced-choice respondents spent
significantly more time responding to the questions, and among these respond-
ents there was no difference in the number of options marked affirmatively by
response time. In contrast, respondents who answered check-all questions
quickly marked significantly fewer options and appear to have employed a
weak satisficing response strategy (as evidenced by patterns of primacy),
more so than their counterparts who answered these questions more slowly.
Taken together, these findings support the explanation that the increase in the
mean number of response options marked in the forced-choice format com-
pared with the check-all format is the result of deeper processing. In addition,
they suggest that there is some level of “optimal” processing that respondents
to the forced-choice format and those using over the mean amount of time in
the check-all format are more likely to reach than those processing the check-
all questions quickly. These findings raise concerns about the use of the
check-all format because on average 66 percent of check-all respondents spent
at or below the mean response time and, therefore, may not have reached that
“optimal” processing level.
It appears that the use of the forced-choice question format, by virtue of the
fact that it asks for consideration of every response option, is a desirable alter-
native to the use of the check-all question format for multiple-answer ques-
tions in Web surveys. The forced-choice format seems to promote deeper
processing and allows for finer differentiation of meaning because options are
explicitly marked negatively, but it does not encourage acquiescence, and it is
not prone to high item nonresponse. Although the evidence that the forced-
choice format produces “better” (Sudman and Bradburn 1982, p. 168) and
more accurate responses is increasing, like Rasinski, Mingay, and Bradburn
(1994) we lack external validation checks for our data and therefore cannot
say with certainty that the forced-choice format produces more accurate
responses. As such, this is an issue in need of further research.
In addition to external validation checks, an important next step in this
research is to compare the use of the forced-choice format in aural (e.g., tele-
phone) surveys with its use in visual, self-administered surveys. Although we
do not yet know how the check-all and forced-choice question formats perform
across modes, the evidence reported here from self-administered surveys
clearly suggests that the forced-choice and check-all formats are not functional
equivalents. These findings give ample reason to be concerned about the com-
mon practice of automatically converting between check-all and forced-
choice formats when switching between self-administered and aural modes
and about combining data across these two formats in mixed-mode surveys.
References
American Association for Public Opinion Research (AAPOR). 2004. Standard Definitions: Final
Disposition of Case Codes and Outcome Rates for Surveys. 3d ed. Lenexa, KS: AAPOR.
Best, Samuel J., and Brian Krueger. 2004. Internet Data Collection. Thousand Oaks, CA: Sage.
Heerwegh, Dirk, and Geert Loosveldt. 2002. “Describing Response Behavior in Web Surveys
Using Client Side Paradata.” Paper presented at the International Workshop on Web Surveys,
Mannheim, Germany.
Krosnick, Jon A. 1991. “Response Strategies for Coping with the Cognitive Demands of Attitude
Measures in Surveys.” Applied Cognitive Psychology 5:213–36.
———. 1992. “The Impact of Cognitive Sophistication and Attitude Importance on Response-
Order and Question-Order Effects.” In Context Effects in Social and Psychological Research,
ed. Norbert Schwarz and Seymour Sudman, pp. 203–18. New York: Springer-Verlag.
———. 1999. “Survey Research.” Annual Review of Psychology 50:537–67.
Krosnick, Jon A., and D. F. Alwin. 1987. “An Evaluation of a Cognitive Theory of Response-
Order Effects in Survey Measurement.” Public Opinion Quarterly 51:201–19.
Rasinski, Kenneth A., David Mingay, and Norman M. Bradburn. 1994. “Do Respondents Really
‘Mark All That Apply’ on Self-Administered Questions?” Public Opinion Quarterly 58:400–408.
Schuman, Howard, and Stanley Presser. 1981. Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. New York: Academic Press.
Smyth, Jolene D., Don A. Dillman, Leah Melani Christian, and Michael J. Stern. 2005. “Comparing
Check-All and Forced-Choice Question Formats in Web Surveys: The Role of Satisficing,
Depth of Processing, and Acquiescence in Explaining Differences.” Social and Economic
Sciences Research Center Technical Report #05-029, Washington State University. Available
online at http://survey.sesrc.wsu.edu/dillman/papers.htm (accessed December 31, 2005).
Sudman, Seymour, and Norman M. Bradburn. 1982. Asking Questions. San Francisco: Jossey-
Bass.
Sudman, Seymour, Norman M. Bradburn, and Norbert Schwarz. 1996. Thinking about Answers:
The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.
Best, Samuel J., and Brian Krueger. 2004. Internet Data Collection. Thousand Oaks, CA: Sage.