Journal of Experimental Psychology:
Learning, Memory, and Cognition
1988, Vol. 14, No. 3, 534-552
Copyright 1988 by the American Psychological Association, Inc.
0278-7393/88/$00.75
Adaptive Strategy Selection in Decision Making
John W. Payne and James R. Bettman
Center for Decision Studies, Fuqua School of Business, Duke University
Eric J. Johnson
Wharton School, University of Pennsylvania
The role of effort and accuracy in the adaptive use of decision processes is examined. A computer simulation using the concept of elementary information processes identified heuristic choice strategies that approximate the accuracy of normative procedures while saving substantial effort. However, no single heuristic did well across all task and context conditions. Of particular interest was the finding that under time constraints, several heuristics were more accurate than a truncated normative procedure. Using a process-tracing technique that monitors information acquisition behaviors, two experiments tested how closely the efficient processing patterns for a given decision problem identified by the simulation correspond to the actual processing behavior exhibited by subjects. People appear highly adaptive in responding to changes in the structure of the available alternatives and to the presence of time pressure. In general, actual behavior corresponded to the general patterns of efficient processing identified by the simulation. Finally, learning of effort and accuracy trade-offs is discussed.
A major empirical finding of recent decision research is
that individuals use a variety of choice strategies (Abelson &
Levi, 1985). Sometimes a person will use a compensatory
strategy that processes all relevant information and trades off
the good and bad aspects of each alternative. At other times,
the same person might use a noncompensatory decision strat-
egy, which avoids trade-offs among values and typically re-
duces information processing demands by ignoring poten-
tially relevant problem information. For example, the lexi-
cographic strategy simply selects the alternative that is best
on the most important attribute if there are no ties. The use
of a particular decision strategy is contingent on many task
and context variables (Payne, 1982), such as the number of
alternatives.
Evidence of contingent information processing in decisions
raises an important question: Why are certain decision strat-
egies applied to certain decision problems? One general per-
spective looks at strategy selection as a function of both costs,
primarily the effort required to use a rule, and benefits,
primarily the ability of a strategy to select the best alternative
(Beach & Mitchell, 1978; Russo & Dosher, 1983). A cost-
benefit approach to strategy selection maintains the concept
of calculated rationality (March, 1978) by including the costs
of executing the decision process in the assessment of ration-
ality. Furthermore, because the costs and benefits of various
decision strategies vary across different problems, the cost-
benefit perspective provides the potential for explaining why
decision strategies vary across situations.
This article examines the adaptive selection of choice strategies and is structured as follows: First, a framework for measuring both the cognitive effort and accuracy of different strategies in various decision environments is presented. That
framework decomposes choice strategies into a common set
of more elementary information processes (EIPs). Next, a
Monte-Carlo simulation of the effort and accuracy of choice
strategies in a variety of choice environments is reported and
the impact of time constraints on the relative accuracy of
decision strategies is examined. The simulation identifies
general patterns of adaptivity in processing that might be
expected if the effort and accuracy framework is correct and
is used to hypothesize patterns of context and task effects for
a particular decision environment. Two experimental studies
are then reported that examine the correspondence between
the patterns identified by the simulation and actual behavior.
Effort and Accuracy in Choice
One major difficulty in using a cost-benefit perspective to
examine strategy selection has been the lack of an easily
calculated and conceptually appropriate measure of effort. A
second area of concern has been the lack of agreement on
how to measure choice accuracy.
The research reported in this article was supported by a contract
from the Engineering Psychology Program of the Office of Naval
Research.
The order of authorship is arbitrary; each author contributed
equally to all phases of this project.
Correspondence concerning this article should be addressed to
John W. Payne, Center for Decision Studies, Fuqua School of Business, Duke University, Durham, North Carolina 27706.
Measuring Strategy Effort
Building on ideas of Newell and Simon (1972), Johnson
and Payne (1985) suggested that decision strategies can be
decomposed into EIPs. A decision strategy can then be seen
as a sequence of events, such as reading the values of two
alternatives on an attribute, comparing them, and so forth.
One set of EIPs for decision making follows: (a) read an alternative's value on an attribute into short-term memory (STM), (b) compare two alternatives on an attribute, (c) add the values of two attributes in STM, (d) calculate the size of the difference of two alternatives for an attribute, (e) weight one value by another (product), (f) eliminate an alternative from consideration, (g) move to the next element of the external environment, and (h) choose the preferred alternative and end the process. Such EIPs provide a common language for describing seemingly diverse decision strategies in terms of their underlying components. This is important if strategy selection is to be investigated at an information processing level rather than at a more general level of analysis, such as analytic versus nonanalytic (Beach & Mitchell, 1978) or analytic versus intuitive (Hammond, 1986). The EIPs can also be used as components in production system models of decision strategies (see Johnson & Payne, 1985, for an example). Productions are condition → action pairs, where the action is performed only if the condition is matched. The EIPs could be used as the actions, and the results of earlier actions could be used as parts of conditions (e.g., if A and B have been read, then add A and B).
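To make the production formalism concrete, the following is a minimal sketch (ours, not the authors' implementation) of the addition production just described, with a Python dictionary standing in for short-term memory:

```python
# A minimal, hypothetical sketch of one production built from EIPs.
# "stm" stands in for short-term memory; the names are illustrative only.

def add_production(stm):
    """IF values A and B have been read into STM, THEN add them (the ADD EIP)."""
    if "A" in stm and "B" in stm:          # condition: both values are present in STM
        stm["A+B"] = stm["A"] + stm["B"]   # action: the ADD elementary process
        return True                        # the production fired
    return False

stm = {"A": 250, "B": 430}                 # values placed in STM by earlier READ EIPs
print(add_production(stm), stm)            # True {'A': 250, 'B': 430, 'A+B': 680}
```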
A particular set of EIPs represents a theoretical judgment
regarding the appropriate level of decomposition for decision
processes. For instance, the product operator might itself be
decomposed into more elementary processes. We hypothesize,
however, that a reasonable approximation of the cognitive
effort associated with a strategy may be obtained from the
foregoing level of decomposition.
A count of the total number of EIPs used by a given strategy to reach a decision in a particular choice environment provides a measure of the effort associated with the use of that decision strategy in that environment (O. Huber, 1980; Johnson, 1979). A number of studies of cognition use EIP counts to measure processing load (e.g., Card, Moran, & Newell, 1983). A study that directly relates EIP counts to measures of decision effort is described in Bettman, Johnson, and Payne (1987).
Measuring Accuracy
Accuracy of choice can be defined in many ways. Quality
of choice can be defined by basic principles of coherence such
as not selecting dominated alternatives or not displaying
intransitive patterns of preferences. Note that violations of dominance can be defined in terms of a single choice, whereas violations of transitivity are defined over several choices. More
specific criteria for decision quality can be developed in
certain types of choice environments. For instance, the ex-
pected utility (EU) model is often suggested as a normative
decision procedure for risky choice because it can be derived
from more basic principles. A special case of the EU model,
the maximization of expected value (EV), has been used as a
criterion to investigate the accuracy of decision heuristics via
computer simulation (Thorngate, 1980; Johnson & Payne,
1985).
The main advantage of EV as an accuracy measure is
that utility values from individual decision makers are not
required to operationalize the rule. A similar model, the
compensatory weighted additive rule, is often used as a crite-
rion for decision effectiveness in multiattribute choice (Zakay
& Wooler, 1984).
A Monte-Carlo Simulation Study of Effort and
Accuracy in Choice
This study provides predictions about the patterns of processing that would be exhibited in various task environments
by an idealized adaptive decision maker attending to both
effort and accuracy in selecting a decision strategy. The sim-
ulation was used to generate hypotheses about the types of
processing that might occur in the experiments described
below if decision makers adapt to different task environments
as predicted by the proposed framework.
The simulation extends prior work reported in Johnson
and Payne's (1985) article. In particular, the present study
investigates environments with time constraints, potentially
one of the most significant task variables. Under time con-
straints, heuristics might be even more accurate than a "nor-
mative" strategy such as maximization of expected value,
because the heuristic's accuracy may degrade under increasing
time pressure at a slower rate than a more comprehensive
processing rule (e.g., EV) degrades. One reason for this is that
heuristics require fewer operations and will generally be "fur-
ther along" when time runs out. Furthermore, people may
use heuristics under time pressure because they have no other
choice (Simon, 1981). A more normative decision strategy
like expected utility maximization may exceed the informa-
tion processing capabilities of a decision maker, given any
"reasonable" time limit. Deciding how to choose then be-
comes a selection of the "best" of the available heuristics, not
a choice between using some heuristic or the more normative
rule.
Choice Environment and Processing Characteristics
The decision task used in the simulation study and in the
empirical studies was a special type of risky choice, with
alternatives with outcomes that have different payoffs but the
same probability for each alternative. In other words, each of the alternatives may have a different value for a given outcome, but the probability of receiving that outcome is the same for all the alternatives. This allows the decision task to be interpreted as either a riskless choice or as a form of risky choice (Keeney & Raiffa, 1976). In the riskless interpretation, the probabilities function as attribute weights that apply across alternatives. One can look at a probability of .20, for example,
as the weight given to a particular attribute across all alter-
natives. Note that a statement about structural similarity is
all that is being claimed; the empirical work described later
used the risky choice interpretation.
In solving risky choice problems, the decision maker must
search among probabilities and the values associated with the
outcomes for each alternative. Different decision strategies
can be thought of as different rules for conducting that search
and vary in a number of aspects (see Bettman, 1979). One of
the most important distinctions among rules is the extent of
compensatory as compared to noncompensatory processing.
A related aspect is the degree to which the amount of processing is consistent (or selective) across alternatives or attributes. That is, is the same amount of information examined
for each alternative or attribute, or does the amount vary? In
general, it has been assumed that more consistent processing
across alternatives is indicative of a more compensatory de-
cision strategy (Payne, 1976). Consistent processing some-
times involves examination of all information for every alter-
native and attribute. A more variable (selective) processing
pattern, on the other hand, is seen as indicating a strategy of
eliminating alternatives on the basis of only a partial process-
ing of information, without considering whether additional
information might compensate for a poor value.
Another general processing characteristic is the total
amount of processing carried out. Whether processing is
consistent or not, the total amount of information examined
can vary, from quite cursory to exhaustive.
A final aspect of processing concerns whether the search
and evaluation of alternatives proceeds across or within attri-
butes or dimensions. The former is often called wholistic or alternative-based processing and the latter dimensional or attribute-based processing. In alternative-based processing, multiple attributes of a single alternative are considered before information about a second alternative is processed. In contrast, in attribute-based processing, the values of several alternatives on a single attribute are processed before information
about a second attribute is processed. Russo and Dosher
(1983) suggest that attribute-based processing is cognitively
easier.
The next section provides additional detail on the specific
strategies used in the simulation. Following these descriptions,
we provide examples of how the specific strategies exemplify
the above distinctions.
Decision Strategies Examined
The simulation investigated 10 decision strategies. The 10
strategies were selected because they vary substantially in the
amount of information used and in the way that available
information is used to make a choice.
The most information intensive strategy examined was a version of a weighted additive (WADD) compensatory process,
which can be thought of as a version of expected value
maximization. The strategy considers the values of each alter-
native on all of the relevant attributes (outcomes) and all of
the relative importances (weights or probabilities) of the dif-
ferent attributes (outcomes) to the decision maker. The rule
develops a weighted value for each attribute by multiplying
the weight (probability) by the value and sums over all attri-
butes to arrive at an overall evaluation of an alternative. The
rule selects the alternative with the highest evaluation. The random (RAN) choice rule, in contrast, chooses an alternative
at random with no search of the available information, pro-
viding a minimum baseline for measuring both accuracy and
effort.
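As an illustration of how a strategy can be expressed as a sequence of EIPs whose total count indexes effort, the following sketch codes the WADD rule with a running tally of operations. Which EIPs are charged at each step is our assumption, not the authors' simulation program:

```python
from collections import Counter

def wadd_choice(probs, payoffs):
    """Weighted additive (expected value) choice over payoffs[alt][attr],
    tallying the elementary information processes (EIPs) used along the way."""
    eips = Counter()
    best_alt, best_score = None, None
    for alt, values in enumerate(payoffs):           # alternative-based processing
        total = 0.0
        for attr, value in enumerate(values):
            eips["READ"] += 2                        # read the probability and the payoff into STM
            eips["PRODUCT"] += 1                     # weight the payoff by its probability
            eips["ADD"] += 1                         # add the weighted value to the running total
            total += probs[attr] * value
        eips["COMPARE"] += 1                         # compare with the best alternative so far
        if best_score is None or total > best_score:
            best_alt, best_score = alt, total
    eips["CHOOSE"] += 1                              # announce the preferred alternative
    return best_alt, eips

# Payoffs for Gambles A and B from the low-dispersion example used later in the article.
probs = [0.22, 0.26, 0.24, 0.28]
payoffs = [[8.73, 7.83, 1.74, 8.91], [7.54, 4.64, 5.11, 6.73]]
print(wadd_choice(probs, payoffs))
```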
In addition to these two baseline rules, six choice heuristics and two combination strategies were implemented. The equal weight (EQW) rule examines all alternatives and all attribute values for each alternative. However, the rule ignores information about the relative importance (probability) of each attribute. In some contexts, the equal weight rule has been advocated as a highly accurate simplification of the risky choice process (Thorngate, 1980). Elimination by aspects (EBA) (Tversky, 1972) begins by determining the most important attribute (the outcome with the highest weight [probability]). Then, the cutoff value for that attribute is retrieved, and
all alternatives with values for that attribute below the cutoff
are eliminated. The process continues with the second most
important attribute, then the third, and so on, until one
alternative remains.
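A hedged sketch of the EBA process as just described follows; the cutoff value and the handling of the case in which every remaining alternative fails the cutoff are illustrative assumptions:

```python
import random

def eba_choice(probs, payoffs, cutoff):
    """Elimination by aspects: drop alternatives whose value on the current
    attribute falls below the cutoff, taking attributes in order of decreasing
    probability, until one alternative is left."""
    remaining = set(range(len(payoffs)))
    for attr in sorted(range(len(probs)), key=lambda a: probs[a], reverse=True):
        survivors = {alt for alt in remaining if payoffs[alt][attr] >= cutoff}
        if survivors:                              # illustrative choice: never empty the set
            remaining = survivors
        if len(remaining) == 1:
            break
    return random.choice(sorted(remaining))        # any residual tie is broken at random

# The high-dispersion example gambles described later in the article.
probs = [0.20, 0.04, 0.07, 0.69]
payoffs = [[6.86, 1.18, 4.96, 0.84], [1.38, 3.34, 8.49, 2.91]]
print(eba_choice(probs, payoffs, cutoff=2.00))
```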
The majority of confirming dimensions (MCD) rule (Russo & Dosher, 1983) involves processing pairs of alternatives. The
values for each of the two alternatives are compared on each
attribute, and the alternative with a majority of winning
(better) attribute values is selected. In the case of an equal
number of winning values for the two alternatives, our version
of this rule retained the alternative winning the comparison
on the last attribute. The retained alternative is then compared to the next alternative among the set of alternatives. The process of pair-wise comparison repeats until all alternatives have been evaluated and the final winning alternative identified. The satisficing (SAT) rule (Simon, 1955) considers alternatives one at a time, in the order they occur in the set. Each attribute of an alternative is compared to a cutoff value. If any attribute value is below the cutoff value, that alternative
is rejected. The first alternative which passes the cutoffs for
all attributes is chosen, so a choice can be made before all
alternatives have been evaluated. In the case where no alter-
native passes all the cutoffs, a random choice is made.
Two versions of the lexicographic choice rule were implemented. For the strict lexicographic (LEX) rule, the most important attribute is determined, the values of all the alternatives on that attribute are examined, and the alternative with the best value on that attribute is selected. If there are ties, the second most important attribute is examined, and so on, until the tie is broken. Because the simulation generates attributes as continuous random variates, ties almost never occur. A lexicographic semi-order (LEXSEMI) rule (Tversky, 1969) was also examined. This rule is similar to the strict lexicographic rule, but introduces the notion of a just-noticeable difference (JND). If several alternatives are within a JND of the best alternative on the most important attribute, they are considered to be tied. The potential advantage of the LEXSEMI rule is that it ensures that an option that is marginally
better on the most important attribute but much worse on
other attributes will not necessarily be selected.
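The two lexicographic variants differ only in how ties are defined, which the following sketch makes explicit (our illustration; a JND of zero yields the strict LEX rule):

```python
def lex_choice(probs, payoffs, jnd=0.0):
    """Strict lexicographic choice when jnd == 0; the lexicographic semi-order
    (LEXSEMI) when alternatives within jnd of the best are treated as tied."""
    contenders = list(range(len(payoffs)))
    for attr in sorted(range(len(probs)), key=lambda a: probs[a], reverse=True):
        best = max(payoffs[alt][attr] for alt in contenders)
        contenders = [alt for alt in contenders
                      if best - payoffs[alt][attr] <= jnd]   # within a JND counts as tied
        if len(contenders) == 1:
            break
    return contenders[0]                                     # first of any remaining ties

probs = [0.20, 0.04, 0.07, 0.69]
payoffs = [[6.86, 1.18, 4.96, 0.84], [1.38, 3.34, 8.49, 2.91]]
print(lex_choice(probs, payoffs))            # strict LEX
print(lex_choice(probs, payoffs, jnd=2.50))  # LEXSEMI: a large JND can change the choice
```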
Finally, two combined strategies were implemented. The first was an elimination-by-aspects plus weighted additive (EBA+WADD) rule. This rule used an EBA process until the number of available alternatives remaining was three or fewer,
and then used a weighted additive rule to select among the
remaining alternatives. The other combined strategy, elimi-
nation-by-aspects plus majority of confirming dimensions
(EBA+MCD), used an elimination-by-aspects process to reduce
the problem size, and then used a majority of confirming
dimensions heuristic to select from the reduced set. These
combinations were used because they had been observed in
several previous choice process studies (e.g., Bettman & Park,
1980).
As noted earlier, these choice strategies differ on a number of aspects, such as the degree to which the amount of processing is consistent or variable across attributes or alternatives, the pattern of processing (alternative based or attribute based), and the total amount of processing. The various strategies represent different combinations of these aspects. The weighted adding strategy uses consistent and alternative-based processing and examines all available information. The equal weight strategy uses consistent and alternative-based processing but uses a subset of the available information. The MCD rule is consistent, attribute-based, and ignores weight information. The EBA rule implies a variable (selective) pattern of processing that is attribute based. The total amount of information processed by EBA depends on the particular values of the alternatives and cutoffs. The lexicographic strategies are also selective and attribute based, and the satisficing strategy is selective and alternative based. The total amount of information processed is also contingent upon the particular values
of the alternatives for these strategies.
The simulation provides insights into how aspects of proc-
essing, as exemplified by individual strategies, might change
across different choice environments if adaptivity is exhibited.
Other aspects of processing, such as the proportion of proc-
essing devoted to the probabilities and the proportion of
processing devoted to the most probable (important) attribute,
will also be considered.
Task and Context Variables
Three task variables were examined. The number of alter-
natives and number of attributes were each varied at three
levels (2, 5, and 8) in order to manipulate task complexity.
The third task variable included was time pressure, varied at
four levels. One level involved no time pressure, with each
rule using as many operations as needed. The three other
levels of time constraint were a maximum of (a) 50 EIPs
(severe time pressure), (b) 100 EIPs (moderate pressure), and
(c) 150 EIPs (low pressure). These time (EIP) constraint values were selected on the basis of an analysis of the maximum number of EIPs associated with the most effortful rule (weighted additive).1 Note that the total number of EIPs was used to operationalize time pressure. This implicitly assumes that each EIP takes a similar amount of time. The sensitivity
of the analyses to this assumption is examined later.
A key issue in dealing with the time constraints is how rules should select among alternatives if they run out of time. Several rules identify one alternative as the best seen so far (i.e., the WADD, EQW, and MCD rules) and select that alternative when they run out of time. The EBA, LEX, and SAT rules all pick an option randomly from those alternatives not yet eliminated. Because the EBA and lexicographic rules were able to process all alternatives on at least one attribute, even for the largest problem size under the most severe time constraint, the choice came from the set already processed but not eliminated. For the SAT rule in this most severe case, if the first alternative was not acceptable, then random choice among the remaining alternatives seemed to be the most reasonable option. For the two combined strategies, the selection was either made at random from the alternatives not yet eliminated, if the combined strategy was still in the EBA phase,
or the best so far, if in the WADD or MCD phase.
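For concreteness, one way to impose such an EIP budget is sketched below for the WADD rule; the class names and the per-cell EIP charge are our illustrative assumptions, not the authors' code:

```python
class OutOfTime(Exception):
    """Raised when the EIP budget is exhausted."""

class EIPBudget:
    def __init__(self, limit):
        self.limit, self.used = limit, 0
    def spend(self, n=1):
        self.used += n
        if self.limit is not None and self.used > self.limit:
            raise OutOfTime

def wadd_under_time_pressure(probs, payoffs, limit):
    """WADD truncated at `limit` EIPs; on timeout it keeps the best alternative
    identified so far (a simplification of the fallback described above)."""
    budget = EIPBudget(limit)
    best_alt, best_score = 0, float("-inf")
    try:
        for alt, values in enumerate(payoffs):
            total = 0.0
            for attr, value in enumerate(values):
                budget.spend(4)                    # READ x2, PRODUCT, ADD for this cell
                total += probs[attr] * value
            budget.spend(1)                        # COMPARE with the best so far
            if total > best_score:
                best_alt, best_score = alt, total
    except OutOfTime:
        pass                                       # time ran out: keep the best seen so far
    return best_alt

probs = [0.25, 0.25, 0.25, 0.25]
payoffs = [[1, 2, 3, 4], [9, 9, 9, 9], [5, 5, 5, 5]]
print(wadd_under_time_pressure(probs, payoffs, limit=None))  # no time pressure: alternative 1
print(wadd_under_time_pressure(probs, payoffs, limit=6))     # severe truncation: first alternative kept
```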
Finally, two context variables were included. Context variables, unlike task variables, are associated with the particular
choice object values (Payne, 1982). One context variable was
the presence or absence of dominated alternatives. Removing
dominated alternatives produces efficient choice sets. Mc-
Clelland (1978) suggested that the success of the equal weight-
ing simplification strategy is dependent on the presence of
dominated alternatives. Empirical evidence showing that
dominated alternatives can impact choice was provided by J.
Huber, Payne, and Puto (1982). This implies that dominated
alternatives are not simply disregarded.
The second context variable was the degree of dispersion
of probabilities within each gamble. To illustrate, a four-
outcome gamble with a low degree of dispersion might have
probabilities of .30, .20, .22, and .28 for the four outcomes,
respectively. On the other hand, a gamble with a high degree
of dispersion might have probabilities such as .68, .12, .05,
and .15 for the four outcomes. This variable was chosen
because Thorngate (1980) had suggested that probability in-
formation may be relatively unimportant in making accurate
risky choices (see also Beach, 1983). Obviously, if all of the
outcome probabilities were identical, probability information
would not matter. On the other hand, if one outcome is
certain, then examining the probability information to find
that outcome is crucial. What is unclear is how sensitive
heuristics are to the dispersion in probabilities, and how
adaptive actual behavior is to such a context variable. We
therefore examined decision sets with either low or high
dispersion.
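The article does not report the exact generator used to produce low- and high-dispersion probability vectors; the following sketch shows one simple way such vectors could be produced, under our own assumptions about the sampling ranges:

```python
import random

def random_probabilities(n_outcomes, high_dispersion, rng=random.Random(0)):
    """Draw n outcome probabilities that sum to 1. High dispersion is produced by
    sampling over a wider range before normalizing; the exact scheme is our assumption."""
    low, high = (0.05, 1.0) if high_dispersion else (0.8, 1.2)
    raw = [rng.uniform(low, high) for _ in range(n_outcomes)]
    total = sum(raw)
    return [x / total for x in raw]

print(random_probabilities(4, high_dispersion=False))  # roughly equal, e.g. like .30/.20/.22/.28
print(random_probabilities(4, high_dispersion=True))   # skewed, e.g. like .68/.12/.05/.15
```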
JNDs and Cutoff Values
Three of the rules, EBA, SAT, and LEXSEMI, involve parameters that affect the potential effort and accuracy of the rules. For EBA and SAT this is the cutoff value used to eliminate alternatives. For the LEXSEMI rule, it is the value of the JND. Although these parameters are, in some sense, under the control of the decision maker for each decision, we wanted to establish a priori values that would be the same for all decisions made by the simulation. A pilot simulation without any time constraints was run to identify the best levels, with all attributes in the simulation drawn from a uniform distribution bounded by 0 and 1,000. We manipulated both cutoffs (100, 300, and 500) and JNDs (1, 50, and 100) and selected values that represented the most efficient accuracy-effort trade-offs averaged across the entire set of decisions. We found that values of the cutoff of 500 and 300 were most efficient for EBA and SAT, respectively, and that a JND of 50 gave the
best performance for the LEXSEMI rule. The results presented
1 To provide insight into the ranges of values possible, the average
number of EIPs required for the weighted additive rule to run to
completion ranged from 28 for the two-alternative, two-attribute case
to 400 for the eight-alternative, eight-attribute case. Comparable
figures for the lexicographic strategy are 21.3 (2 x 2) and 172.5 (8 x
8).
for the EBA, SAT, and LEXSEMI rules are for the most efficient
values for each rule.
Method
Each of the 10 decision rules was applied to 200 randomly gener-
ated decision problems in each of the 288 conditions defined by a 3
(number of alternatives) by 3 (number of attributes) by 2 (low or high
dispersion of probabilities or weights) by 2 (presence or absence of
dominated alternatives) by 2 (cutoff values) by 4 (time constraints)
factorial. After each trial, the alternative selected was recorded, along
with a tally for each elementary operation used by the decision rule.
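A compact sketch of the overall simulation loop follows. Only three of the six design factors are shown, and generate_problem and run_strategy are placeholder names for components like those sketched earlier, not the authors' program:

```python
import itertools
import random

def generate_problem(n_alts, n_attrs, high_dispersion, rng):
    """One random decision problem: outcome probabilities plus a payoff matrix
    drawn from a uniform distribution bounded by 0 and 1,000, as in the pilot."""
    low, high = (0.05, 1.0) if high_dispersion else (0.8, 1.2)
    raw = [rng.uniform(low, high) for _ in range(n_attrs)]
    probs = [x / sum(raw) for x in raw]
    payoffs = [[rng.uniform(0, 1000) for _ in range(n_attrs)] for _ in range(n_alts)]
    return probs, payoffs

def run_strategy(probs, payoffs):
    """Placeholder for one decision rule; here a WADD stand-in returning
    (chosen alternative, rough EIP count)."""
    chosen = max(range(len(payoffs)),
                 key=lambda a: sum(p * v for p, v in zip(probs, payoffs[a])))
    return chosen, 4 * len(probs) * len(payoffs)

rng = random.Random(1)
results = []
for n_alts, n_attrs, disp in itertools.product([2, 5, 8], [2, 5, 8], [False, True]):
    for trial in range(200):                      # 200 random problems per condition
        probs, payoffs = generate_problem(n_alts, n_attrs, disp, rng)
        chosen, eips = run_strategy(probs, payoffs)
        results.append((n_alts, n_attrs, disp, chosen, eips))
print(len(results), "trials recorded")            # 3 x 3 x 2 x 200 = 3,600 here
```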
Results
Effort was measured using EIPs, and accuracy was meas-
ured using EV (WADD) maximization. Specifically, effort was
measured by the total count of the EIPs used by a specific
decision rule to make a selection from a particular set of
alternatives. This measure assumes that each EIP requires the
same level of time or mental effort. Later, we report results
that relax that assumption.
The EV-based measure of accuracy compared the relative
performance of strategies to the two baseline strategies: (a) the
maximization of expected value (WADD), and (b) random
choice. The measure was defined by the following equation:
relative accuracy = (EV of heuristic rule choice - EV of random rule choice) / (EV of expected value choice - EV of random rule choice)
The maximum expected value possible in a particular choice
set and the expected value associated with a random selection
were determined. The expected value of the alternative se-
lected by a decision heuristic was then compared to these two
baseline values. This measure is bounded by a value of 1.00
for the EV rule, and an expected value of 0.0 for random
selection. It provides a measure of the relative improvement
of a heuristic strategy over random choice. Although this
measure of accuracy may seem somewhat arbitrary, the results
are not sensitive to the use of alternative criteria (see Johnson
& Payne, 1985, for a discussion of other accuracy measures).
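The relative accuracy measure defined above translates directly into code; the following sketch (our illustration) also shows that the measure can fall below zero when a heuristic's choice is worse than random:

```python
def expected_value(probs, payoffs, alt):
    return sum(p * v for p, v in zip(probs, payoffs[alt]))

def relative_accuracy(probs, payoffs, chosen_alt):
    """(EV of the chosen alternative - EV of a random choice) /
       (EV of the EV-maximizing choice - EV of a random choice)."""
    evs = [expected_value(probs, payoffs, a) for a in range(len(payoffs))]
    ev_random = sum(evs) / len(evs)        # expectation of picking an alternative at random
    ev_best = max(evs)
    return (evs[chosen_alt] - ev_random) / (ev_best - ev_random)

probs = [0.20, 0.04, 0.07, 0.69]
payoffs = [[6.86, 1.18, 4.96, 0.84], [1.38, 3.34, 8.49, 2.91], [5.00, 5.00, 5.00, 5.00]]
print(relative_accuracy(probs, payoffs, chosen_alt=2))  # 1.0: the EV-maximizing choice
print(relative_accuracy(probs, payoffs, chosen_alt=0))  # negative: worse than a random choice
```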
Table 1 presents the relative accuracy and effort scores for
each of the 10 decision strategies in each of the four cells
defined by crossing the context factors dispersion in weights
(low, high) and dominance (present or absent). These scores
are for the no-time-pressure conditions. The results are aver-
aged over number of alternatives and number of attributes.
Note that the aspects characterizing each strategy are also
included to aid interpretation of the results.
Table 1
Simulation Results for Accuracy and Effort of Heuristics in the No-Time-Pressure Decision Problems

                                                          Task environment
                                            Dominance possible          Dominance not possible
            Processing    Processing        Low disp.     High disp.    Low disp.     High disp.
Strategy    form          selectivity       RA    UOC     RA    UOC     RA    UOC     RA    UOC
WADD        Alternative   No                1.0   160     1.0   160     1.0   160     1.0   160
EQW         Alternative   No                .89    85     .67    85     .41    85     .27    85
SAT         Alternative   Yes               .32    49     .31    49     .03    61     .07    61
MCD         Attribute     No                .62   148     .48   148     .07   141     .09   140
LEX         Attribute     Yes               .69    60     .90    60     .67    60     .90    60
LEXSEMI     Attribute     Yes               .71    87     .87    78     .64    79     .77    81
EBA         Attribute     Yes               .67    87     .66    88     .54    82     .56    82
EBA+WADD    Mixed         Yes               .84   104     .79   106     .69   102     .66   102
EBA+MCD     Attribute     Yes               .69    89     .59    89     .29    86     .31    86

Note. RA = relative accuracy (95% confidence interval width = ±.029). UOC = unweighted operations count (95% confidence interval width = ± ...). WADD = weighted additive strategy, EQW = equal weight strategy, SAT = satisficing strategy, MCD = majority of confirming dimensions strategy, LEX = lexicographic strategy, LEXSEMI = lexicographic semi-order strategy, EBA = elimination by aspects strategy, EBA+WADD = combined elimination by aspects plus weighted additive strategy, EBA+MCD = combined elimination by aspects plus majority of confirming dimensions strategy.
No-time-pressure results. The simulation results indicate
that in some environments, heuristics can approximate the
accuracy of a normative strategy (WADD), with substantial
savings in effort. A decision maker using an EQW model, for
example, can achieve 89% of the relative performance of the
normative model, with only about half the effort, in the low-
dispersion, dominance-possible task environment. Even more
impressive is the performance of the strict lexicographic rule
in the high-dispersion task environments. The lexicographic
rule achieves 90% relative accuracy, with only about 40% of
the effort. Note that the performance of the lexicographic-
semiorder rule exceeds that of the simpler lexicographic rule
in only one of the four decision environments. The extra
effort needed to use JNDS may only be of value in a limited
set of situations.
It is clear from Table 1 that the most efficient heuristic
varies across decision environments. In the low-dispersion,
dominance-possible environment, for example, the processing
simplification of ignoring probability (weight) information,
that is, the equal weight strategy, appears quite accurate. In
contrast, when the dispersion in probabilities is higher, the
lexicographic rule, which ignores all the payoff information
except that associated with the single most likely outcome, is
the most accurate heuristic, and is substantially better than
the equal weight rule. It is also clear that some heuristics (e.g., MCD and SAT) perform reasonably in the dominance-possible
environments, but are very poor performers when all domi-
nated alternatives have been removed. Note also that in the
low-dispersion, dominance-absent environment, the best sim-
ple heuristic, LEX, has an accuracy score of .67. That accuracy score is .22 less than the accuracy score for the "best" heuristic
in the other three environments. This suggests that a decision
maker in such an environment would not be able to reduce
effort much without suffering a substantial loss in accuracy.
Decision problems involving low dispersion, dominance-ab-
sent environments may therefore be particularly difficult.
In summary, heuristic strategies can be highly accurate in
some environments, but no single heuristic does well across
all contexts. This suggests that if a decision maker wanted to
achieve both a reasonably high level of accuracy and low
effort, he or she would have to use a repertoire of strategies,
with selection contingent upon situational demands.
An interesting set of results from Table 1 concerns the
performance of the two combined decision strategies. The
combination of an elimination process with a weighted adding
model (EBA+WADD) performed well across all task conditions.
That rule offers a good combination of expected accuracy and
reasonable levels of expected effort. The EBA+MCD rule, on
the other hand, seems to be an inefficient combination strategy.
Although it is not shown in Table 1, there were systematic
effects of the task variables number of alternatives and num-
ber of attributes. For example, the mean accuracy of the equal
weight rule decreased only from .93 to .87 as the number of
attributes increased from two to eight in the low-dispersion,
dominance-possible environment. However, the mean accu-
racy of the LEX rule did decrease substantially, from .86 to .55, as the number of attributes was increased. The decrease
in accuracy for the lexicographic rule reflects the fact that a
rule that uses only information associated with a single (al-
though the most probable) outcome would be expected to
perform worse as an increasing number of relatively impor-
tant (probable) outcomes are ignored. In contrast, the impact
of increases in number of attributes on the
EQW
and
LEX
rules
was reversed for the high-dispersion, dominance-possible en-
vironments. The mean accuracy for the
EQW
rule decreased
from .71 to .49 for the two outcome and eight outcome
problems, respectively, reflecting the fact that the rule essen-
tially overweights information from more and more outcomes
with small probabilities as the number of outcomes is in-
creased. The LEX rule decreased only from .93 to .87 for the
same problems.
The number of alternatives also had effects on effort. For
instance, an increase in the number of alternatives from two
to eight increased the average EIP count of the weighted
adding rule by 191 EIPs. The EBA strategy, on the other hand,
only increased by 79 EIPs as the number of alternatives went
from two to eight. More generally, the effort required to use
heuristics increased more slowly than the effort required to
use more normative procedures as the number of alternatives
was increased. This simulation result is compatible with prior
empirical work showing shifts in strategies due to number of
alternatives (Payne, 1982).
The potential trade-offs between accuracy and effort for the
different strategies are highlighted by Figure 1, which shows
the results from the low-dispersion in probabilities (weights),
dominance possible and high-dispersion, dominance-possible
contexts averaged over number of alternatives and outcomes.
Only the dominance-possible context is shown because it is
used in the experimental work described later. The measure
of effort for each strategy has been turned into a relative
measure based on the ratio of the number of EIPs required
by a heuristic to the number of EIPs required by the most
effortful WADD strategy. A line that indicates an efficient
frontier of strategies, considering both a desire for greater
accuracy and a desire for lesser effort, is drawn for each
context. Figure 1 makes clear both the existence of efficient
heuristics and the fact that the accuracy/effort trade-offs for
various strategies differ across relatively subtle changes in
context, such as the dispersion in probabilities.
The simulation does not identify which particular strategy
a decision maker will necessarily select in a given decision
environment. That would depend on the degree to which a
decision maker was willing to trade decreases in accuracy for
effort savings. However, note that if a decision maker desired
relatively high levels of accuracy in an environment where
dominance is possible, there are accurate strategies in each
environment with substantial savings in effort: the LEX rule in the high-dispersion condition and the EQW strategy in the
low-dispersion condition. Thus, the simulation predicts that
when dominance is possible, one should see more processing
consistent with a LEX strategy (e.g.,
attribute-based processing,
selective processing across attributes, and higher proportions
of processing on probabilities and the most important attri-
bute) in environments with high-dispersion in probabilities.
In contrast, in low-dispersion environments, one should ob-
serve more alternative-based processing, more consistent
processing, and a lower proportion of processing of probabil-
ities and the most important attribute, consistent with strate-
gies like the EQW rule.
Figure 1. Effort/accuracy trade-offs for various decision strategies in the low-dispersion and high-dispersion dominance-possible environments. (Vertical axis: relative accuracy, from RAND at 0 to WADD at 1.0; horizontal axis: relative effort as a percentage of WADD.)
The prediction is based on the assumption that people are sensitive to the relative accuracy of strategies in different contexts, as well as being aware of differences in relative effort.

In addition to this prediction, a more subtle prediction can also be made. Note that if one uses the equal weight strategy in a low-dispersion environment and the LEX strategy under high dispersion, roughly equal accuracy can be attained. However, less effort is required in the high-dispersion environment. Thus, if subjects desire relatively high levels of accuracy, the simulation would predict that accuracy levels would not vary across dispersion conditions, but that effort levels would be lower for the high-dispersion condition.
The simulation results discussed so far have assumed as a first approximation that all EIPs require an equal amount of effort to execute. Prior work by Johnson and Payne (1985) suggested that such an assumption was sufficient for the simplified decision tasks they studied. However, it does seem reasonable that some EIPs may take more time to execute, or be more effortful, than others.
Table 2
Simulation Results for Accuracy of Heuristics Under Time Pressure

                                            Dominance possible                        Dominance not possible
            Processing    Processing        Low dispersion     High dispersion        Low dispersion      High dispersion
Strategy    form          selectivity       LTP   MTP   STP    LTP   MTP   STP        LTP   MTP   STP     LTP   MTP   STP
WADD        Alternative   No                .91*  .80   .28    .91*  .80   .28        .90*  .77*  .12     .92*  .82   .24
EQW         Alternative   No                .88   .82*  .72*   .66   .65   .55        .41   .34   .26     .24   .25   .18
SAT         Alternative   Yes               .38   .34   .30    .32   .34   .23        .03   .04   .06     .07   .05   .04
MCD         Attribute     No                .58   .49   .23    .44   .35   .17        .03  -.01  -.02     .04   .03   .02
LEX         Attribute     Yes               .70   .69   .47    .90   .90*  .59        .69   .68   .48*    .90   .90*  .60
LEXSEMI     Attribute     Yes               .71   .66   .40    .87   .83   .49        .63   .59   .43     .76   .75   .51
EBA         Attribute     Yes               .70   .68   .49    .76   .73   .65*       .63   .60   .48*    .67   .67   .61*
EBA+WADD    Mixed         Yes               .86   .79   .43    .86   .82   .48        .73   .66   .27     .75   .74   .43
EBA+MCD     Attribute     Yes               .74   .65   .44    .67   .60   .49        .35   .32   .27     .40   .41   .36

Note. The 95% confidence interval width for the accuracy values is ±.029. LTP = low time pressure. MTP = moderate time pressure. STP = severe time pressure. WADD = weighted additive strategy, EQW = equal weight strategy, SAT = satisficing strategy, MCD = majority of confirming dimensions strategy, LEX = lexicographic strategy, LEXSEMI = lexicographic semi-order strategy, EBA = elimination by aspects strategy, EBA+WADD = combined elimination by aspects plus weighted additive strategy, EBA+MCD = combined elimination by aspects plus majority of confirming dimensions strategy.
* The most accurate strategy for each task environment.
In another study, Bettman et al. (1987) used counts of elementary operations (EIPs) to predict measures of decision effort such as the total time required to make a decision. The counts of EIPs required by a specific strategy for a specific decision problem provided an excellent (R² = .81) prediction of overall decision latencies. Consequently, estimates of the times associated with the EIPs obtained by Bettman et al. were used to see if the trade-offs between accuracy and effort for the different strategies examined in the present study would change. The major result was that all the heuristics become relatively less effortful when the individual EIPs are weighted. However, the aforementioned key relations between aspects of processing and the context variable of low and high dispersion are essentially unchanged when weighted effort counts were used in place of the equal weighted assumption. The relative performance of the various strategies was almost identical.
Time pressure results. The time pressure results are shown in Table 2. Time constraints clearly have differential effects on the various rules. The WADD rule, for example, shows a marked reduction in accuracy from the baseline value of 1.0 under no time pressure to an average accuracy of only .12 under the most severe time constraint in the no-dominance, low-dispersion condition. In contrast, the EBA heuristic shows relatively little effect of time pressure. The average accuracy across environments is reduced only from .69 with no time pressure to .56 under severe time pressure. Interestingly, the EBA rule is actually the most accurate decision strategy in three of the four environments for severe time pressure. The LEX rule also holds up well under time pressure. It appears that strategies involving an initial processing of all alternatives using a limited set of attributes do well under severe time pressure. On the basis of the simulations, it seems important under high time pressure to use a choice strategy that processes
possible. However, note that in one decision environment
(dominance possible, low dispersion in weights), the alterna-
tive simplification strategy provided by the equal weight rule
is superior for even the most severe time constraint studied.
Implications of the Simulation
The simulation results indicate what a decision maker
might do to adapt to various decision environments. The
results clearly suggest the possibility that a decision maker
might maintain a high level of accuracy and minimize effort
by using a diverse set of heuristics, changing rules as contexts
and time pressures change.
Obviously, the simulation results have to be interpreted
with some caution. Although the results appear to be robust,
both the measures of effort and the measures of accuracy
represent approximations. In addition, it is unlikely that
actual choice behavior involves a straightforward execution
of one choice strategy or another. As noted earlier, there is
evidence for mixtures of strategies being used (Payne, 1976).
The strategies represented in the simulation should be viewed
as prototypical strategies that can be used to hypothesize how
the form of information processing in decision making may
shift as a function of task and context demands.
Despite these limitations, the simulation work provides
insights into how processing might change if efficient accu-
racy-effort trade-offs were desired. The simulation results for
the context variable, dispersion in probabilities, suggest that
when dominated alternatives are possible, more attribute-
based processing, more selective processing across attributes
and alternatives, and a higher proportion of processing on
probabilities and the most important attribute should be
observed in the high-dispersion rather than in the low-disper-
sion condition. Such aspects characterize rules, like the LEX rules,
that are relatively accurate with substantial effort savings
in the high-dispersion environment. In addition, it was noted
earlier that individuals should be able to attain similar levels
of accuracy in both low- and high-dispersion environments,
but they should be able to do so with less effort in the high-
dispersion setting.
The simulation also suggests that strategies characterized
by attribute-based processing and selectivity in processing,
particularly across attributes, should be more effective under
severe time pressure. Strategies such as LEX and EBA, which
maintain accuracy relatively well under heavy time pressure,
also are characterized by a greater proportion of processing
on probabilities and the most important attribute.
The foregoing simulation work could be validated in several ways.
One method, used in Bettman et al. (1987), as noted
earlier, is to show that counts of the elementary operations
generated by the simulation could be used to predict effort-
related behaviors such as the total time required to make a
decision or self-reports of cognitive effort. Another approach
to validation would be to show that adaptivity in information
processing shown by human decision makers, when free to
select any strategy, was in the general directions predicted by
the simulation. The next two experiments take this second
approach to validation and investigate the adaptivity of processing when actual decision behavior is examined.
Empirical Investigations: An Overview
The following experiments examine the degree of corre-
spondence between the actual adaptivity shown by human
decision makers and the adaptive processing patterns (strate-
gies) implied by the simulation results. Specifically, we ask (a)
to what extent do people vary their information processing
behavior as a function of context effects such as the dispersion
of probabilities and task effects such as time pressure?; and
(b) are these changes in processing in the directions suggested
by the simulation? One important feature of these experi-
ments is the use of a complete within-subjects design. Such a
design provides a strong test of adaptivity, because the subject
would be expected to switch strategies from one trial to the
next.
As outlined earlier the simulation results provide a fairly
clear picture of an adaptive decision maker. If decision makers
adapt as suggested by the simulation, there should be a
relation between the dispersion of probabilities and various
aspects of processing. In particular, more attribute-based proc-
essing, greater selectivity in processing across attributes and
alternatives, and a greater proportion of processing devoted
to probabilities and the most important attribute are expected
in a high-dispersion environment. Such shifts in processing
as a function of context would indicate that people are sensi-
tive to changes in choice environments that affect the accuracy
of strategies and not just to changes that affect processing
demands. The reason is that the relative accuracy of rules
varies across contexts (dispersion conditions), but the relative
effort required by the rules does not. Studies showing contin-
gent processing due to task complexity (e.g., changes in num-
bers of alternatives and attributes) are fairly common (see
Payne, 1982); studies showing processing changes due to
context variables, and hence implicitly some concern for
accuracy, are much less common (however, see Busemeyer, 1985; Russo & Dosher, 1983).
The task variable examined is the presence or absence of
time pressure. The simulation results also indicate changes in
aspects of processing under severe levels of time pressure. In
particular, more attribute-based processing, greater selectivity
in processing, and a greater proportion of processing focused
on probabilities and the most important attribute might be
expected.
Other work on time pressure reinforces these predictions.
For example, Ben Zur and Breznitz (1981) identified at least
three ways in which people may respond to time constraints.
One way to cope with time pressure is to process only a subset
of the most important information, an idea referred to as
"filtration" (Miller, 1960). Another way to cope with time
pressure is to "accelerate" processing (Ben Zur & Breznitz,
1981;
Miller, 1960) by trying to process the same information
at a faster rate. Finally, one could shift processing strategies.
At the extreme, this could involve random choice, or "avoidance" (Ben Zur & Breznitz, 1981; Miller, 1960). A less extreme form of contingent processing would involve a shift from a more effortful rule, such as the additive rule, to a less effortful rule, like EBA.
The simulation results indicate that
such a strategy shift could maintain relatively high levels of
accuracy, even under severe time pressure.
The hypothesis of filtration is supported in other studies.
For example, Wright (1974) reported that the most important
information in a judgment task was given more weight under time pressure. Ben Zur and Breznitz (1981) reported shifting to the use of more important information under time pressure.
Furthermore, Ben Zur and Breznitz also found that subjects
spent less time looking at individual items of information
under time pressure. They concluded that combining filtra-
tion and limited acceleration "can be viewed as the optimal
decision making strategy when the [decision maker] is con-
fronted with information overload while pressured by deadlines" (p. 102). Note that filtration can be characterized by
greater selectivity across alternatives and attributes and by
greater emphasis on the most important attribute.
The foregoing hypotheses deal with processing information.
Accuracy under time pressure was addressed by Zakay and Wooler (1984). They found that under time pressure a smaller
proportion of the observed choices consisted of the alternative
that had been measured as having the greatest additive value.
In sum, the simulation and prior empirical research lead to
several hypotheses. Both higher dispersion in probabilities
and higher time pressure are expected to lead to greater use
of attribute-based processing, greater selectivity across attri-
butes and alternatives, and greater focus of processing on
probabilities and the most important attribute. In addition,
there should be no difference in accuracy for different levels
of dispersion, but there should be less effort under high
dispersion. Under high time pressure, accuracy should be
lower and information should be processed more rapidly.
These predictions could be derived in at least two ways.
One could assume that subjects have explicit accuracy and
effort feedback and make conscious trade-offs of accuracy
and effort. Alternatively, one can assume that subjects have
general knowledge of the properties of a reasonable strategy
and of task environments (e.g., see Reder, 1987). Then, in the course of making decisions, subjects generate process feedback
(Anzai & Simon, 1979). That is, subjects can ascertain how
effortful their strategy was and how closely it resembled their
notion of what a "good" strategy should entail. Such process
feedback can be generated without explicit feedback about
outcomes. Subjects would then adapt based upon their general
knowledge and process feedback (Reder, 1987).
In the present article, the second mode of arriving at the
predictions was used. Subjects are assumed to have ideas
about the characteristics of reasonable strategies, to generate
process feedback, and to adapt by using the process feedback
and seeing how well their strategy as executed matches their
view of a "reasonable" strategy. In the experiments reported,
the choices are made without explicit feedback regarding
accuracy for two major reasons: (a) The majority of common
decision problems do not offer the opportunity to receive
immediate and clear feedback about the quality of choice
(Einhorn, 1980); (b) to the extent adaptivity is exhibited in
situations without explicit accuracy feedback, it provides
strong evidence for adaptive decision processing. That is, it
would suggest that adaptivity may be crucial enough to deci-
sion makers that they will guide themselves to it without the
need for an external prod in the form of explicit feedback.
Experiment 1
In this experiment, the extent of adaptivity to changes in
the dispersion of probabilities and to the presence or absence
of time pressure was tested. The main hypotheses are that
people will adapt their behavior to the demands of the decision
environment in accordance with the general patterns identi-
fied by the simulation.
Method
Subjects. A total of 16 undergraduates at Duke University served
as subjects. Participation in the experiment earned credit toward
fulfillment of a course requirement. In addition, the subjects had a
possibility of winning as much as $9.99, depending on their actual
choices.
Stimuli. The stimuli were sets of four risky options. Each option
in a set offered four possible outcomes (attributes). The outcomes
involved possible payoffs ranging from $0.01 to $9.99. Every option
in a set was defined in terms of the same four outcome probabilities.
The probabilities for any given outcome ranged from .01 to .96, with
the constraint that the four outcome probabilities summed to one.
Ten sets of high dispersion in probabilities (weights) options and
10 sets of low dispersion options were generated, with dominated
options allowed in all sets. In terms of the design used for the
simulation study, sets of options were sampled from the low-disper-
sion, dominance-possible and high-dispersion, dominance-possible
conditions. To illustrate, one low-dispersion set of gambles had
probabilities of .22, .26, .24, and .28 for the four possible outcomes. Gamble A provided payoffs of $8.73, $7.83, $1.74, and $8.91, respectively, for the four outcomes. Gamble B provided payoffs of $7.54, $4.64, $5.11, and $6.73. Gambles C and D had different, but similar types of payoffs. One high-dispersion set of options, on the other hand, had probabilities of .20, .04, .07, and .69. The payoffs for Gamble A were $6.86, $1.18, $4.96, and $0.84, respectively. Gamble B had payoffs of $1.38, $3.34, $8.49, and $2.91, respectively. Again,
Gambles C and D were similar. Overall, the sets of options in the
low- and high-dispersion conditions were equivalent in terms of their
average expected values.
The 20 sets of options (10 low dispersion, 10 high dispersion) were
randomly presented under two time pressure conditions. One in-
volved no explicit time pressure. Subjects could take as much time
as they wished to acquire information about probabilities and payoffs
and make a decision. The other condition involved a 15-s time
constraint. In this condition, a clock was shown in the upper-left
corner of the display with the information about the gambles (de-
scribed more fully below). As the 15 s passed, the clock slowly
disappeared. At 15 s, a beep sounded, the subject could not acquire
additional information, and he or she was instructed to make a
choice. For comparison, a pilot study indicated that subjects took
about 50 s, on average, when under no time pressure. In the experi-
ments reported below, subjects averaged approximately 44 s per trial
when under no time pressure.
There were 40 decision problems (2 context conditions x 2 time
pressure conditions x 10 replications) presented to each subject in
random order, with the same random order used for all subjects. A
complete experimental session took 30-45 min for each subject.
The Mouselab methodology. Information acquisitions, response times, and choices were monitored using a software system called Mouselab (Johnson, Payne, Schkade, & Bettman, 1986). This system uses an IBM personal computer, or equivalent, equipped with a "mouse" for moving a cursor around the display screen of the computer. The stimuli are presented on the display in the form of a matrix of available information. The first row of boxes contained information about the probabilities of the four outcomes. The next four rows of boxes contained information about the payoffs associated with the different outcomes for each alternative, respectively. At the bottom of the screen were four boxes that were used to indicate which alternative was most preferred. Figure 2 is an example of a stimulus display with one box opened, and with the time-pressure clock partway through the countdown.
When a set of options first appears on the screen, the values of the
payoffs and probabilities are "hidden" behind the labeled boxes. To
open a particular box and examine the information, the subject has to move the cursor into the box. The box immediately opens and remains open until the cursor is moved out of the box. Only one box can be open at a time.

Figure 2. Example of stimulus display using the Mouselab system with time-pressure clock. The display columns are labeled Outcome 1 through Outcome 4.
The Mouselab program records the order in which boxes are
opened, the amount of time boxes are open, the chosen option, and
the total elapsed time since the display first appeared on the screen.
Response times are recorded to an accuracy of 1/60th of a second.
The Mouselab methodology comes close to the recording of eye
movements in terms of speed and ease of acquisitions, while mini-
mizing instrumentation cost and difficulty of use for both subject and
experimenter. An analysis of the time necessary to move the mouse
between boxes in our displays using Fitts's Law indicates that one
could move between boxes in less than 100 ms (Card et al., 1983).
This
suggests
that the time to acquire information using the
Mouselab
system is limited mainly by the time it takes to think where to point,
rather than by the time it takes to move the mouse. Although the use
of such a process-tracing system itself could possibly induce a change
in strategies, recent research using the
Mouselab
system has replicated
findings (e.g., preference reversals) found in studies that do not use
such a process-tracing mechanism (Johnson, Payne, & Bettman, in
press).
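The Fitts's Law estimate mentioned above can be illustrated with a small calculation. The formulation and coefficients below are illustrative values in the spirit of Card, Moran, and Newell (1983), not parameters reported in this article; the point is only that for adjacent boxes the index of difficulty is well under one bit, so predicted movement time stays under 100 ms.

```python
# Illustrative Fitts's-law estimate of mouse travel time between adjacent boxes.
# The Welford-style formulation and the ~100 ms/bit slope are assumptions in the
# spirit of Card, Moran, and Newell (1983), not figures taken from this article.
from math import log2

def movement_time_ms(distance, width, a_ms=0.0, b_ms_per_bit=100.0):
    index_of_difficulty = log2(distance / width + 0.5)   # bits
    return a_ms + b_ms_per_bit * index_of_difficulty

# Adjacent boxes: center-to-center distance roughly equals one box width.
print(round(movement_time_ms(distance=1.0, width=1.0)))  # ~58 ms, under 100 ms
```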
There might also be concern that the tabular format is unnat-
ural. However, tables of information appear in such magazines as
Consumer
Reports,
and many computer-based decision aids also use
a similar format.
Dependent
measures.
Information acquisition and decision behav-
ior can be characterized in many
ways.
One can examine the amount
and sequence of information acquired, and the time spent acquiring
information (Klayman, 1983). To examine the aforementioned hy-
potheses, we consider seven measures of
aspects
of decision process-
ing. One important aspect is the total amount of processing. One
measure of amount is the total number of times information boxes
were opened for a particular decision, denoted acquisitions
(ACQ).
A
second measure, which is related to the amount of processing effort
and is also directly relevant to the hypothesis of acceleration of
processing under time pressure, is the average time spent per item of
information acquired
(TPERACQ).
The next several measures reflect the relative attention devoted to
specific types of information, and hence are relevant to characterizing
selectivity in processing and the related concept of filtration. One
measure, denoted
PTMI,
is the proportion of the total time acquiring
information that was spent in boxes involving the most important
attribute of a particular decision problem. The attribute (outcome)
with the largest weight (probability of occurrence) was defined to be
the most important attribute. The other measure, denoted
PTPROB,
is
the proportion of time spent on probability information as opposed
to information about payoff values.
The next two measures are the variances in the proportions of time
spent on each alternative
(VAR-ALTER)
and on each attribute
(VAR-
ATTRIB).
Such variances are related to selectivity. As described earlier
more compensatory decision rules
(e.g.,
WADD, EQW,
and
MCD)
imply
a pattern of information acquisition that is consistent (low in vari-
ance) across alternatives and attributes; in contrast, noncompensatory
strategies, like
EBA, LEX,
and
SAT,
imply more variance in processing.
A final measure of processing characterizes the sequence of infor-
mation acquisitions relating to outcome
values.
Given the acquisition
of a particular piece of information, two particularly relevant cases
for the next piece of information acquired involve the same alterna-
tive but different attribute (an alternative-based, holistic, or Type 1
transition), and the same attribute but a different alternative (an
attribute-based, dimensional, or Type
2
transition). A simple measure
of the relative amount of alternative-based (Type 1) and attribute-
based (Type 2) transitions is provided by calculating the number of
Type
1
transitions minus the number of Type 2 transitions divided
by the sum of Type 1 and Type 2 transitions (Payne, 1976). This
measure of the relative use of alternative-based versus attribute-based
processing, denoted PATTERN, ranges from a value of -1.0 to +1.0. A more positive number indicates relatively more alternative-based processing, and a more negative number indicates relatively more attribute-based processing.
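To make the computation of these measures concrete, the sketch below derives all seven from a single trial's acquisition log. The log format (row, column, seconds per box) and the restriction of PATTERN transitions to consecutive payoff acquisitions are assumptions made for illustration; they are not taken from the Mouselab software itself.

```python
# Hypothetical sketch of the seven process measures. Each trial is assumed to be
# logged as a non-empty list of (row, col, seconds) acquisitions, where row 0 holds
# the probability boxes, rows 1-4 the payoff boxes of the four gambles, and
# columns 0-3 index the four outcomes. Names and format are illustrative.
from statistics import pvariance

def process_measures(acquisitions, most_important_col):
    acq = len(acquisitions)                                    # ACQ
    total = sum(t for _, _, t in acquisitions)
    tperacq = total / acq                                      # TPERACQ

    # Selectivity: time on the most probable outcome and on probability boxes.
    ptmi = sum(t for _, c, t in acquisitions if c == most_important_col) / total
    ptprob = sum(t for r, _, t in acquisitions if r == 0) / total

    # Variance in the proportion of time spent on each alternative (payoff rows)
    # and on each attribute (outcome columns, probability and payoff boxes alike).
    alt = [sum(t for r, _, t in acquisitions if r == a) / total for a in range(1, 5)]
    att = [sum(t for _, c, t in acquisitions if c == o) / total for o in range(4)]
    var_alter, var_attrib = pvariance(alt), pvariance(att)

    # PATTERN = (Type 1 - Type 2) / (Type 1 + Type 2), counted here over consecutive
    # acquisitions of outcome (payoff) values -- one reading of the definition above.
    cells = [(r, c) for r, c, _ in acquisitions if r > 0]
    type1 = sum(r1 == r2 and c1 != c2                         # alternative-based
                for (r1, c1), (r2, c2) in zip(cells, cells[1:]))
    type2 = sum(c1 == c2 and r1 != r2                         # attribute-based
                for (r1, c1), (r2, c2) in zip(cells, cells[1:]))
    pattern = (type1 - type2) / (type1 + type2) if (type1 + type2) else 0.0

    return dict(ACQ=acq, TPERACQ=tperacq, PTMI=ptmi, PTPROB=ptprob,
                VAR_ALTER=var_alter, VAR_ATTRIB=var_attrib, PATTERN=pattern)
```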
In addition to these seven measures of processing, a measure of relative accuracy, defined as above in terms of EV maximization and random choice, was developed and denoted GAIN.
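For readers skimming the tables, the relative-accuracy index has the general form below; this is a paraphrase of the earlier definition, in the spirit of Johnson and Payne (1985), rather than a quotation:

\[
\mathrm{GAIN} \;=\; \frac{EV_{\text{chosen}} - \overline{EV}_{\text{random}}}{EV_{\max} - \overline{EV}_{\text{random}}}
\]

where \(EV_{\text{chosen}}\) is the expected value of the gamble actually selected, \(EV_{\max}\) is the largest expected value in the set, and \(\overline{EV}_{\text{random}}\) is the mean expected value of the set (the expected payoff of choosing at random), so that 1.0 corresponds to perfect expected-value maximization and 0 to random choice.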
These measures can be related directly to the hypotheses outlined previously. Higher dispersion in probabilities and higher time pressure should lead to lower values of PATTERN (more attribute-based processing); higher values of VAR-ALTER and VAR-ATTRIB (greater selectivity); and higher values of PTPROB and PTMI (greater focus on probabilities and the most important attribute). In addition, there should be fewer acquisitions (ACQ) and lower TPERACQ under high dispersion (less processing effort), and TPERACQ should be lower under time pressure. Finally, GAIN should be similar across levels of dispersion, but lower under high time pressure.
Procedure. Each subject
was
told that
the
purpose
of
the experi-
ment was
to
understand how people make decisions, that there were
no objectively "right"
or
"wrong" choices,
and
that
the
"best" action
was
to
choose that risky option they would most prefer
to
play.
Subjects were also told that
at the end of
the experiment
a
decision
problem would
be
selected
at
random,
and the
option they
had
chosen would be played by randomly generating
an
outcome accord-
ing
to the
probabilities
for
that option. They would
be
allowed
to
keep
whatever money they
won.
Thus,
the subjects could win between
$0.01 and
$9.99,
depending on their choices and the random process.
Subjects then were instructed
on
the Mouselab information acqui-
sition system
and
allowed
to
practice
its use.
Next, they were told
that they would
be
presented with
a
series
of
decisions involving
choices among risky options
and
that some decisions would involve
an explicit time constraint, whereas
for
other decision problems they
could take as long as they wished.
Results
Overview.
The
main focus
in the
results concerns
how
people adapt
to the
task manipulation
of
time pressure
and
the context manipulation of dispersion in
probabilities.
Effects
are examined
for
the foregoing four main types
of
dependent
measures: amount
of
processing, selectivity
in
processing,
pattern
of
processing,
and
relative accuracy.
To provide the strongest possible test of adaptivity,
a
within-
subjects experimental design
was
used. Subjects, however,
may have
to
experience several examples
of
different types
of
decision problems before settling
on a
preferred strategy
for a
particular type
of
problem. Consequently,
the
results
are
presented both
for the
block
of 20
decision problems seen
first
by the
decision maker,
and the
block
of the
last
20
decisions. Problems corresponding
to
each
of
the four time-
pressure-dispersion combinations were distributed essentially
equally over
the
two blocks.
A multivariate analysis. Given
the
likely correlations
among
the
various process measures,
the
data were first
analyzed using
a
multivariate analysis
of
variance with three
within-subject factors (dispersion, time pressure,
and
block).
The analysis included
the
aforementioned seven process
measures plus the measure of relative
accuracy,
denoted
GAIN.
The means
for
these measures
are
presented
in
Table
3.
Overall, the main effects of dispersion, F(8, 606) = 21.54, time pressure, F(8, 606) = 62.98, and block, F(8, 606) = 6.28, were highly significant (p < .001). There was a significant dispersion by time pressure interaction, F(8, 606) = 4.05, p < .001, and an interaction of time pressure with block, F(8, 606) = 5.38, p < .001. There was no interaction of block and dispersion, F(8, 606) = 1.20, ns, although there was a significant three-way interaction of block by dispersion by time pressure, F(8, 606) = 3.05, p < .01.
To more fully characterize
the
effects
of
dispersion, time
pressure,
and
block, separate univariate analyses
of
variance
were conducted
for
each
of the
dependent measures.
The
results presented
in
Table
3
will first
be
discussed
in
terms
of
dispersion
in
probabilities
(a
context effect), then
for
time
pressure (a task effect), and then briefly
for
block. Interactions
involving block
are
considered where relevant. Within each
section, the results are presented
for
the amount of processing
measures (ACQ and
TPERACQ),
then for the selectivity measures
(PTMI, PTPROB, VAR-ATTRIB, and VAR-ALTER), and
then
the
PATTERN
measure.
A
summary
of
results
at the
individual
subject level is briefly presented,
and
then
the GAIN
measure
is discussed.
Table 3
Summary of Process Measures and GAIN as a Function of Time Pressure, Context, and Decision Block: Experiment 1

                          No time pressure                       Time pressure = 15 s
                    Low dispersion    High dispersion      Low dispersion    High dispersion
Dependent measure   Block 1  Block 2  Block 1  Block 2     Block 1  Block 2  Block 1  Block 2
ACQ                  46.6     35.3     35.1     27.6        18.3     17.6     15.6     15.4
TPERACQ              .754     .668     .650     .622        .492     .487     .507     .493
PTMI                 .322     .335     .419     .417        .347     .352     .446     .480
PTPROB               .232     .252     .245     .285        .283     .297     .281     .289
VAR-ALTER            .010     .011     .011     .016        .012     .012     .013     .012
VAR-ATTRIB           .011     .013     .021     .035        .013     .018     .031     .035
PATTERN             -.111    -.107    -.319    -.329       -.103    -.164    -.446    -.408
GAIN                 .694     .609     .585     .611        .269     .616     .398     .643

Note. ACQ = number of information boxes examined, TPERACQ = time per information acquisition, PTMI = proportion of time on the most important attribute, PTPROB = proportion of time on the probability information, VAR-ALTER = variance in the proportion of time spent on each alternative, VAR-ATTRIB = variance in the proportion of time spent on each attribute (including both payoff and probability information), PATTERN = index reflecting relative amount of attribute-based (-) and alternative-based (+) processing, GAIN = relative accuracy of choices.

Effects of dispersion. High dispersion was predicted to lead to fewer acquisitions (ACQ), less time per acquisition (TPERACQ), greater focus on the most important dimension (PTMI) and on probabilities (PTPROB), higher selectivity for attributes (VAR-ATTRIB) and alternatives (VAR-ALTER), and more attribute-based processing (lower values of PATTERN). There was no effect expected on GAIN. The dispersion manipulation generally showed these effects.
As predicted, there was a significant difference between low and high dispersion for both the number of acquisitions (ACQ) (M = 28.90 vs. M = 23.82), F(1, 617) = 36.39, p < .001, and time per acquisition (M = .60 vs. M = .57), F(1, 617) = 7.21, p < .001. Less effort was used in reaching a decision for the high-dispersion problems. The only significant dispersion by time pressure interactions were for ACQ, F(1, 617) = 12.91, p < .001, and TPERACQ, F(1, 617) = 12.56, p < .001. The effect of dispersion is greater in the no-time-pressure problems.
The pattern
of
results
for the variables related
to
selectivity
also
was
largely
as
predicted. There was more focus
on the
dimension associated with the largest probability
(PTMI)
with
high dispersion
of
probabilities
(M = .34 vs. M = .44), F(1, 617) = 92.34, p < .001.² However, contrary to prediction, there was no significant difference in the proportion of time spent on probabilities (M = .27 vs. M = .27), F(1, 617) = 1.34, ns. Both the variance in processing across attributes (VAR-ATTRIB) and across alternatives (VAR-ALTER), on the other hand, increased significantly for the high-dispersion problems (M = .014 vs. .030), F(1, 617) = 119.13, p < .001, and (M = .011 vs. M = .013), F(1, 617) = 7.14, p < .05, respec-
tively. Thus, one effect of increased dispersion was to increase
the amount
of
selectivity
in
processing, which
is
consistent
with
the use of
heuristic processes such
as the
LEX
or
EBA
strategies.
The hypothesis
of a
shift
in
strategies
due to the
context
manipulation
is
also supported
by the
fact that
the
amount
of attribute-based processing (shown
by
negative values
of
PATTERN)
increased significantly
as the
dispersion
of
proba-
bilities increased
(M = -.12
vs. -.37),
F(l,
613)
=
54.60,
p
<
.001.
This result
for
PATTERN
is consistent with greater use
of strategies such
as
EBA
or the
lexicographic rule
for
high
dispersion
in
probabilities.
The foregoing results are averaged across subjects. Individ-
ual subjects showed patterns similar
to
those reported above.
For example, 86%
of
the subjects acquired less information
in
the
high-dispersion condition. Seventy-five percent spent
less time
per
acquisition
for
high-dispersion problems.
One
hundred percent of
the
subjects spent more time
on
the most
important attribute, 56% spent more time
on
probabilities,
100%
had greater variance in processing across attributes, and
82%
had greater variance
in
processing across alternatives
for
the high-dispersion problems. More attribute-based process-
ing was shown
by
82%
of
the subjects
in the
high-dispersion
condition than
in the
low-dispersion condition. Thus, both
group and individual analyses show adaptivity in process as
a
function of context.
Finally,
as
expected,
the
average
GAIN
scores
for the low-
dispersion and high-dispersion problems did
not
differ
signif-
icantly
(M = .54 vs. M = .56),
F(1, 617) = .06, ns. The
accuracy
of
the processes used
in the two
contexts
was ap-
proximately the same.
The simulation results suggest that
an
adaptive decision
maker could take advantage of changes in context to maintain
accuracy with substantially less processing effort. The experi-
mental results clearly demonstrate
a
shift
in
processing strat-
egies with variation in context. People demonstrated an ability
to shift processing
to
take advantage
of
problem structure
so
as to reduce processing load while maintaining accuracy. The
empirical support
for
this relatively subtle prediction
of
the
simulation provides strong support
for
the current approach.
Previous work
on
contingent decision behavior
has
most
clearly demonstrated
a
sensitivity to effort, such as the effects
of variations
in the
number
of
alternatives
or
attributes
(Payne, 1982).
The
present work demonstrates
an
ability
to
maintain accuracy even under
a
subtle change
in
context,
as
well as sensitivity
to
effort.
Effects of time
pressure.
High time pressure was predicted
to lead
to
lower values
for
ACQ and
TPERACQ;
higher values
for
PTMI,
PTPROB,
VAR-ATTRIB,
and
VAR-ALTER;
a
lower value
for
PATTERN;
and
a
lower value for
GAIN.
As expected, subjects acquired fewer items
of
information
(ACQ)
in the
time-constrained choice environments
(M =
35.98
vs. M =
16.74),
F(l, 617) =
378.44,
p <
.001. This
finding was qualified by
a
block by time pressure interaction,
F(1, 617)
=
20.15,
p
<
.001,
which showed that the amount of
information acquisition did not vary over blocks
in
the high-
time-pressure condition,
but
that the amount
of
information
acquired
in the
second block was less than that acquired
in
the first block under no time pressure.
One major hypothesis regarding time pressure and decision
making
is
that people adapt to time constraints by accelerating
their processing.
The
results
for the
time
per
acquisition
variable
(TPERACQ)
indicate that people did process informa-
tion significantly faster under time pressure (M
=
.67
s
vs.
M
=
.48 s),
F(1,
617) =
217.36,
p <
.001.
A
block
by
time
pressure interaction, F(l, 617)
=
3.87,
p <
.05, showed that
a
decrease over blocks only occurred
in the
no-time-pressure
condition. These results are consistent with those
of
Ben Zur
and Breznitz (1981).
The results for the variables related
to
selectivity and filtra-
tion also supported
the
hypotheses.
The
proportion
of
time
spent
on the
most important attribute (most likely outcome;
PTMI)
was significantly greater under time pressure
(M
=
.37
vs.
M =
.41),
F(1,
617)
=
9.83,
p <
.01.
The
proportion
of
time spent
on
probabilities
(PTPROB)
was also greater
for
the
time-pressured problems
(M= .25
vs.
M=
.29), F(l, 617)
=
18.14,
p
<
.001,
clearly supporting the filtration hypothesis.
Greater selectivity
in
processing under time pressure
was
also indicated by greater variance
in
processing the attributes
(VAR-ATTRIB)
with a time constraint (M = .019
vs.
M = .024),
F(1,
617)
=
5.98,
p
< .05. Interestingly, there was no effect of
time pressure on the amount of variance
in
processing across
alternatives,
(M = .30 vs. M = .31),
F(1,
617) = .06, ns.
Although the lack
of
results
for
VAR-ALTER
is not as hypoth-
esized, the results for
PTMI, PTPROB,
and
VAR-ATTRIB
support
the notions of increasing filtration and selectivity under time
pressure.
Further evidence for a shift in information processing strat-
egy
as a
function
of
time pressure
is
provided
by the
results
2 Analyses
of
variance
for
PTMI
and
PTPROB
were also
run
with
both measures transformed by an arc sine transformation. The results
were essentially the same, and the untransformed results are reported
for simplicity,
for PATTERN of processing. Under time constraint, processing
became marginally more attribute based
(M =
-.22 vs.
M =
-.28), F(1, 617) = 3.55, p = .06.
To summarize,
the
results showed that people adapted
to
time pressure by accelerating processing, increasing the selec-
tivity of
processing,
and moving toward more attribute-based
processing. The latter two effects, taken together, are consist-
ent with
the
greater
use of
heuristics like
the
LEX
or
EBA
strategy under time pressure.
Again, the means for each individual showed that a majority
of subjects responded
in the
same directions
as
indicated
by
the group analysis.
For
instance, 100%
of
the subjects accel-
erated processing under time constraint (ACQ and TPERACQ).
Sixty-nine percent showed evidence
of
filtration
as
indicated
by
PTMI
and
VAR-ATTRIB.
Sixty-three percent demonstrated a
greater focus
on
probabilities, 44% showed higher values
for
VAR-ALTER,
and 63%
demonstrated more attribute-based
processing under time constraints.
In addition to time pressure effects on processing, there was
a clear impact
of
time constraint
on
accuracy. Relative accu-
racy was lower under time pressure
(M = .62
vs.
M
=
.48),
F(1,
617)
=
8.32,
p <
.01.
An
examination
of
the pattern
of
means
in
Table 3, however, makes
it
clear that the decrement
in performance is concentrated
in
the responses
to
the earlier
(first block) problems involving time pressure.
By the
latter
block, performance
had
improved
to
levels similar
to
those
obtained
in the
no-time-pressure condition,
as
verified
by a
significant block
by
time pressure interaction,
F(l, 617) =
10.73,
p<
.01.
Discussion
The central conclusion from
the
results
of
Experiment
1
is
that people exhibit
a
substantial degree
of
adaptivity
in
their
decision behavior. Decision processes were sensitive
to a
context variable that influences
the
relative accuracy
of
heu-
ristics. Decision processes were also sensitive to the important
task variable
of
time pressure. Across
a
variety
of
dependent
measures,
the
pattern
of
results supported
the
predictions.
These findings
of
adaptivity
are
particularly strong
in
that
they were exhibited
by the
same subjects
on
different trials.
Finally, the general pattern
of
adaptive behavior was consist-
ent with the simulation results.
The time-pressure results supported
the
hypotheses that
increased time pressure would result
in (a)
acceleration
of
information processing,
(b)
filtration
of
information
to be
processed,
and (c) to a
lesser extent, changes
in the
choice
heuristics used
to
make
a
decision. Prior research
has sup-
ported
the
acceleration
and
filtration hypotheses,
but the
present experiment also suggests changes in information proc-
essing strategies as
a
function of
time
pressure.
The existence
of at
least three ways
in
which people
can
adapt to time pressure leads to the following question: Is there
an ordering
to the
adaptive strategies people use
to
deal with
time pressure? That
is, do
people first
try to
deal with time
constraints through acceleration
and
perhaps filtration
of
processing? Selecting
an
alternative decision process
in re-
sponse
to
time pressure
may
only occur
if the
first
two
responses are
not
adequate. The next experiment investigates
that possibility
by
examining
a
case
of
less
severe time pres-
sure.
Experiment
2
This study examines
the
extent
and
direction
of
adaptive
decision processing when
the
amount
of
time pressure is less
severe than that investigated
in
Experiment
1.
Specifically,
one time pressure condition
in
this study used
a
25-s limit.
For comparison, and also for purposes of replication, a second
time pressure condition used
the
15-s limit used
in
Experi-
ment
1.
Furthermore, subjects
in
this study returned
for a
second day. During
the
second session,
the
experiment was
repeated,
but
with
the
time pressure level
set at 25 s if a
subject
had
received
15 s on the
first
day or set at 15 s if a
subject had 25
s
on the first day. The inclusion of
the
second
session was intended
to
explore how adaptivity
to
one choice
environment might influence adaptivity to
a
slightly different
choice environment. Finally, this study again examined
the
effects
of
dispersion
in
probabilities.
Method
Subjects.
A
total
of
28
undergraduate students served
as
subjects
in this experiment
in
return
for
course credit
and the
chance
to
win
money. Because this experiment involved two different experimental
sessions,
the
maximum amount
of
money that could
be won was
$19.98 ($9.99
for
each session).
Stimuli and
procedures.
The
stimuli
and
procedures used
in
this
study were essentially
the
same
as
those used
in
Experiment
1. For
the
first
session, subjects
were
randomly assigned to one of
two
groups:
time pressure
=
15
s
(Group
1) or
time pressure
= 25 s
(Group
2).
Owing
to
computer problems, cell sizes were unequal, with
16
subjects
in Group
1
and
12
subjects in Group
2.
One difference in instructions
from the previous experiment was that subjects were told how much
time was involved in the time pressure
trials.
After the end of the first
session, a gamble preferred
by
the subject
was
selected,
but not played.
The second session
had
the time pressure set
at
the level opposite
to
that received
on the
first
day.
Also,
the
order
of
the outcomes
and
alternatives
was
permuted
for the
sets
of
gambles
to
reduce
the
possibility that
the
subject would remember
the
particular choice
problems from the previous day.
Results
The measures
of
process
and
accuracy used
in
this study
were the same as those used in the previous experiment. Table
4 presents the means
for
each
of
the seven process measures
and
GAIN
as a
function
of
day, group, presence/absence
of
time pressure, and low versus high dispersion. The data were
analyzed with
a
five
within-subjects factor multivariate analy-
sis
of
variance (presence
of
time pressure, dispersion, block,
day,
and
level
of
time pressure:
25 s
vs.
15 s).
Subjects were treated as a factor nested within day and level.
The multivariate analysis
of
variance showed significant
effects
of
dispersion,
F(8, 2147) =
116.24,
p <
.001,
and
presence
of
time pressure, ^(8, 2147)
=
200.64,
p <
.001.
In
addition, the main effects of
day,
F(8, 2147)
=
21.42, level
of
time pressure, F(8, 2147)
=
18.16,
and
block, F(8, 2147)
=
23.63,
were
all
significant
(p
< .001).
The two-way interactions were generally significant as well.
Table 4
Summary of Process and Accuracy Results: Experiment 2

Group 1 (N = 16)

Day 1 results              NTP                        TP = 15 s
Dependent measure    Low disp.   High disp.     Low disp.   High disp.
ACQ                    50.8        42.1            19.5        17.2
TPERACQ                .64         .62             .48         .48
PTMI                   .29         .37             .32         .43
PTPROB                 .24         .27             .27         .29
VAR-ALTER              .010        .013            .009        .013
VAR-ATTRIB             .007        .018            .013        .027
PATTERN                .00        -.22            -.03        -.31
GAIN                   .56         .59             .43         .42

Day 2 results              NTP                        TP = 25 s
ACQ                    48.6        36.6            27.7        24.2
TPERACQ                .58         .56             .49         .49
PTMI                   .28         .39             .27         .41
PTPROB                 .22         .26             .22         .27
VAR-ALTER              .010        .013            .008        .014
VAR-ATTRIB             .006        .020            .006        .021
PATTERN                .13        -.11             .22        -.08
GAIN                   .66         .57             .59         .52

Group 2 (N = 12)

Day 1 results              NTP                        TP = 25 s
ACQ                    52.8        45.2            28.8        27.7
TPERACQ                .64         .62             .52         .52
PTMI                   .30         .39             .32         .41
PTPROB                 .19         .20             .20         .22
VAR-ALTER              .008        .010            .008        .010
VAR-ATTRIB             .005        .017            .007        .021
PATTERN                .30         .00             .33         .03
GAIN                   .75         .81             .67         .64

Day 2 results              NTP                        TP = 15 s
ACQ                    42.0        36.4            21.0        19.0
TPERACQ                .57         .54             .46         .47
PTMI                   .30         .39             .30         .43
PTPROB                 .19         .21             .20         .23
VAR-ALTER              .009        .011            .009        .011
VAR-ATTRIB             .005        .020            .007        .026
PATTERN                .39        -.02             .45        -.06
GAIN                   .74         .75             .65         .58

Note. NTP = no time pressure. TP = time pressure. ACQ = number of information boxes examined, TPERACQ = time per information acquisition, PTMI = proportion of time on the most important attribute, PTPROB = proportion of time on the probability information, VAR-ALTER = variance in the proportion of time spent on each alternative, VAR-ATTRIB = variance in the proportion of time spent on each attribute (including both payoff and probability information), PATTERN = index reflecting relative amount of attribute-based (-) and alternative-based (+) processing, GAIN = relative accuracy of choices.

Of most interest were a presence of time pressure by dispersion
interaction, F(8, 2147) = 54.13, p < .001, a presence of time pressure by block interaction, F(8, 2147) = 10.39, p < .001, and a day by level of time pressure interaction, F(8, 2147) = 54.13, p < .001. The first two interactions are consistent with those obtained in Experiment 1. The latter interaction suggests that it did matter whether a subject received the 15-s (severe time pressure) problems on the first or second day.
Analyses
of
simple effects within
the 15-s and 25-s
time-
pressure groups were performed. Because
the
major focus
of Experiment
2 was on
time-pressure effects, those results
are discussed first, followed
by
results
for
dispersion. Then
effects of day and level are briefly considered. The predictions
for
the
various dependent variables
are the
same
as in
Experiment
1.
Time
pressure
effects.
The
results most comparable
to
Ex-
periment
1,
of course, are the
first-day
results.
An examination
of the first-day process means
for
Group 1 (time pressure
=
15
s) and
Group
2
(time pressure
= 25 s),
reported
in
Table
4,
indicates that
the
previous findings
for 15 s did
replicate,
and that there
may be a
hierarchy
of
responses
to
levels
of
time pressure.
An analysis
of
simple effects
for the
variables related
to
amount
of
processing showed fewer acquisitions with time
pressure present
for
both the 15-s (M
=
46.43
vs.
M
= 18.38),
F(1, 2154)
=
507.01,
p <
.001,
and the 25-s conditions
(M =
49.03 vs.
M =
28.25),
F(l,
2154)
=
209.82,
p <
.001. There
was also less time per acquisition under time pressure
in
both
cases:
15-s
condition
(M = .63 vs. M = .48), F(l, 2154 =
394.92,
p <
.001;
25-s condition
(M =
.63 vs.
M =
.52),
F(l,
2154)
=
156.88,
p <
.001. Thus, both time pressure levels
show evidence consistent with acceleration
of
processing.
Analyses
of
simple effects
for the
amount
of
filtration
and
selectivity
in
processing show similar patterns within the
15-s
and 25-s time-pressure conditions
for
the first day. However,
the results
for the 15-s
level
of
time pressure
are
generally
stronger. For the
15-s
level, there were effects of time pressure
on
PTMI
(M = .33 vs. M = .37), F(1, 2154) = 22.86, p < .001, PTPROB (M = .25 vs. M = .28), F(1, 2154) = 11.76, p < .001, and VAR-ATTRIB (M = .013 vs. M = .020), F(1, 2154) = 37.37, p < .001. There was no effect on VAR-ALTER (M = .011 vs. M = .011), F(1, 2154) = .02, ns.
There
was an
effect
of
time
pressure for the 25-s level on
PTMI
(M = .34 vs. M = .37),
F(l, 2154) = 4.99, p < .05, and marginal effects for
PTPROB
(M
=
.20 vs.
M =
.21),
F(l,
2154)
=
3.71,
p
< .06,
and VAR-
ATTRIB
(M = .011 vs. M = .014), F(1, 2154) = 3.66, p < .06.
There was
no
effect
on
VAR-ALTER
(M
=
.009 vs.
M =
.009),
F(l,
2154) = .26, ns.
Thus, there
is
evidence
for
selectivity
under both time pressure conditions, although there appears
to
be
more selectivity when time pressure
is
severe.
Of greatest importance
for
the hypothesis
of
a hierarchy
of
time pressure effects was
the
finding
of
a significant effect
of
time pressure
on
pattern
of
processing
in the
first
day for the
15-s condition
(M = -.11 vs. M = -.17), F(1, 2154) = 4.86, p < .05,
with more attribute-based processing under time
pressure.
In
contrast, however, there
was no
effect
of
time
pressure
on
pattern
of
processing
in the
25-s condition
(M = .15 vs. M = .18), F(1, 2154) =
1.21, ns. Thus, we find evidence
of a shift toward more attribute-based processing under severe
time pressure,
but no
shift
in
processing with moderate time
pressure.3
Finally, accuracy (GAIN) was lower under time pressure
for
both
the
15-s(M=
.58
vs.
M=
.43),
F(l,
2154)
=
9.22,
p <
.01,
and
25-s conditions
(M=
.77 vs.
M=
.66), F(l, 2154)
=
4.95,
p
<
.05.
Although not shown
in
Table 4, the detrimental
effect of time pressure on GAIN was again greatest for the first
block
of
trials,
particularly
for
the 15-s condition
(M
= .29).
Dispersion
effects.
The
pattern
of
effects
for
dispersion
in
probabilities
for
Experiment
2
was similar
to
that found
for
Experiment
1.
With respect
to
amount
of
processing,
for the
15-s condition there was
an
effect of dispersion
on
ACQ
(M =
35.19 vs.
M =
29.67),
F(l,
2154)
=
24.54,
p <
.001,
and a
marginal effect on TPERACQ (M= .56
vs.
M=
.55),
F(l, 2154)
= 2.77,
p <
.10.
For the
25-s condition, there was
an
effect
on
ACQ
(M
= 40.81 vs.
M =
36.46), F(l, 2154)
=
16.41,
p<
.001,
and a
marginal effect
on
TPERACQ
(M = .58
vs.
M =
.57),
F(l, 2154)
=
3.52,
p
< .07.
Analyses
of
simple effects
for the
selectivity
and
filtration
measures show effects for the 15-s level
on
PTMI
(M=
.31
vs.
M = .40), F(l, 2154) = 134.98, p < .001,
PTPROB
(M = .25
vs. M = .28), F(l, 2154) = 7.01, p < .01,
VAR-ATTRIB
(M =
.010 vs.
M=
.022), F(l, 2154)
=
130.01,
p <
.001,
and
VAR-ALTER (M = .010 vs. M = .013), F(1, 2154) = 14.83, p < .001.
For the 25-s group there were effects on PTMI (M = .31
vs.
M
= .40), F(l, 2154) = 112.62, p < .001, and
VAR-ATTRIB
(M =
.005 vs.
M=
.019),
F(\,
2154)
=
115.15,
p<
.001.
There was
a marginal effect on
PTPROB
(M = .20 vs. M = .21), F(l,
2154)
=
2.72,
p <
.10,
and a
marginal effect
on
VAR-ALTER
(M
=
.008 vs.
M =
.010),
F(l,
2154)
=
3.20,
p <
.08. Thus,
there is evidence for greater selectivity under high dispersion,
but
it
is stronger
for
the 15-s condition.
There were also strong effects on PATTERN for both the 15-
s
(M = -.01 vs. M =
-.27),
F(l,
2154)
=
82.03,
p <
.001,
and
25-s
conditions
(M = .31 vs. M = .01), F(l, 2154) =
91.99,
p <
.001. Processing becomes more attribute based
with higher dispersion. The results
for
dispersion
for
the 25-s
condition are important
in
that they demonstrate adaptation
to the dispersion manipulation. Hence, the failure to adapt
to
time pressure
via
strategy change
in the
25-s condition
is not
due
to a
total failure
to
obtain adaptivity
in
that condition.
There are no effects of dispersion
on
GAIN
in
either the
15-
s
(M = .50 vs. M = .51), F(1, 2154) = .09, ns, or 25-s
conditions
(M = .70 vs. M = .71), F(l, 2154) = .06, ns.
Finally, there
are
significant dispersion
by
time pressure
in-
teractions
for
ACQ,
F(l,
2154)
=
10.36,
p <
.001, TPERACQ,
F(l, 2154) = 6.04, p < .01, and
VAR-ATTRIB,
F(l, 2154) =
10.61,
p<
.001,
for the 15-s condition; for the 25-s condition,
the only significant interaction
was for
ACQ,
F(l, 2154) =
10.80,
p < .001.
Once again,
the
hypothesis
of
adaptivity
in
processing was
strongly supported for dispersion in probabilities. The subjects
in Experiment 2, like those
in
Experiment 1, were apparently
able to take advantage of context changes to reduce processing
effort while maintaining essentially the same level of accuracy.
Day
and
level
effects.
As
noted earlier,
the
multivariate
analysis
of
variance showed
a
significant
day by
level inter-
action. Subjects who experienced
a
moderate time constraint
on
the
first
day
acted differently than
did
those who faced
a
severe constraint
on the
first
day. An
examination
of the
results
in
Table
4
indicates little difference
in the
processing
for Group 2 (25-s time pressure for the first day) between Day
1 and Day 2. The means
for
both VAR-ATTRIB and PATTERN,
for example,
are
similar
for
each day.
A
simple explanation
is that
a
15-s time constraint
on the
second day was
not
that
severe
a
time constraint. Experience with the task
on
the first
day resulted
in
the constraint
of
15
s on the second day being
more like
a
moderate level
of
time pressure.
On the
other
hand, note that
the
second-day responses
for
Group 1
(15-s
time pressure
for the
first day)
are
intermediate between
the
first day responses
to 15-s
time pressure
and the
first
day
responses
to
25-s time pressure.
Another interesting comparison concerns the no-time-pres-
sure data
for
Groups 1
and 2 on the
first day. Consider,
for
example, the results for PATTERN. The no-time-pressure mean
for
the 15-s
condition
was
-.11.
The
mean
for the 25-s
condition
was .15.
Thus,
it
appears that there
was
some
carryover from behavior generated
in
response
to the
time-
pressure trials to the no-time-pressure
trials.
This suggests that
the degree
of
adaptivity
to
time pressure found
in
these
experiments
was not
perfect
on a
trial
by
trial basis.
The
development
of
a strategy appropriate
to a
particular level
of
time pressure apparently affected the strategy used
in
the
no-
time-pressure situations. There
is
also evidence
for
such
car-
ryover
in the
second-day responses
of
Group
1
discussed
earlier.
Discussion
The results
of
Experiment
2
support
the
findings
of
Exper-
iment
1 and
show that subjects adapt
to
changes
in
context
(dispersion)
by
exhibiting changes
in
selectivity, type
of
proc-
essing, and amount of processing while maintaining accuracy.
Subjects appear
to
adapt
to
severe time pressure
by
accelera-
tion, filtration,
and
changes
in
strategy. They
do not
appear
to change strategy
in
response
to
more moderate time pres-
sure.
The
second-day results also demonstrate some interest-
ing effects regarding
how
adaptation
to one
choice environ-
ment carries over
to
adaptation
to a
different choice environ-
ment. Table
5
summarizes the main effects
for
dispersion
and
time pressure
for
both experiments.
An Alternative Hypothesis
Although
the
pattern
of
results
for
time pressure
for
Exper-
iment
2 is
consistent with
the
findings
of
Experiment
1, an
3 A third experiment, identical
in
procedure
to
Experiment 1
but
using
a
25-s level
of
time pressure, was also conducted.
The
results
were similar to those reported in the text. Under
25
s of time pressure
there was evidence
of
acceleration
of
processing, weak support
for
the hypothesis of filtration, and no evidence of changes in the pattern
of processing. More details on that experiment are available from
the
authors.
Table 5
Summary of Main Effect Results for Dispersion and Time Pressure: Experiments 1 and 2

Note. The table reports, for each dependent measure (ACQ, TPERACQ, PTMI, PTPROB, VAR-ALTER, VAR-ATTRIB, PATTERN, and GAIN), the direction and significance of the dispersion and time-pressure main effects in Experiment 1 and in the 15-s and 25-s conditions of Experiment 2. The signs represent the direction of each effect (e.g., higher dispersion led to fewer acquisitions in Experiment 1). TP = time pressure. ACQ = number of information boxes examined, TPERACQ = time per information acquisition, PTMI = proportion of time on the most important attribute, PTPROB = proportion of time on the probability information, VAR-ALTER = variance in the proportion of time spent on each alternative, VAR-ATTRIB = variance in the proportion of time spent on each attribute (including both payoff and probability information), PATTERN = index reflecting relative amount of attribute-based (-) and alternative-based (+) processing, GAIN = relative accuracy of choices.
* p < .10. ** p < .05. *** p < .01.
alternative explanation, suggested by an anonymous reviewer,
must be examined. Suppose subjects only adjust to the dis-
persion manipulation. They might do this by first examining
the probabilities and then engaging in a mixture of attribute-
and alternative-based processing under low dispersion or
mostly attribute-based processing if dispersion is high. If one
supposes further that whatever alternative-based processing is
used tends to be greater toward the end of the choice process,
then a simple truncation of the process under time pressure
could lead to the observed results. Subjects may not change
their processing strategy but may simply use a truncated
version of the strategy under time pressure. This possibility
must be seriously considered, as prior research (e.g., Bettman
& Park, 1980) has shown that alternative-based processing
does increase relative to attribute-based processing later in the
choice process.
To examine this alternative hypothesis, we consider proc-
essing patterns early in the choice process. In particular, we
consider
the
processing occurring in the
first
eight acquisitions
of
each
time pressure trial. Eight acquisitions were selected as the unit of analysis for several reasons. First, two distinct
processing patterns could be exhibited within eight acquisi-
tions,
namely, examination of all four probabilities and all
four values for one alternative or acquiring information on
all four probabilities and all four values for one attribute
across alternatives. Although subjects may not follow these
two patterns in pure form, eight acquisitions should allow for
any differential tendencies in starting the process to emerge.
Second, eight acquisitions is also roughly half the average
number of acquisitions for the 15-s time pressure condition.
Over
98%
of
the
trials had eight acquisitions or more.
The alternative hypothesis can be tested by comparing the
processing patterns for these first eight acquisitions for each
time pressure trial for the 15-s and 25-s time pressure condi-
tions.
If
the
alternative truncation hypothesis is correct, there
should be no difference between the 15-s and 25-s conditions
on these initial acquisitions for the time pressure trials, as the
overall differences between these conditions are hypothesized
to be due to truncation at the end of the process. If our
interpretation that subjects are using different strategies is
correct, however, there should be differences between the 15-
s and 25-s conditions at the beginning of the process.
We analyzed the data for the first eight acquisitions for each
time pressure trial from Experiment 2. The variables exam-
ined were two selectivity measures, the variances in the pro-
portion of time spent on attributes (VAR-ATTRIB) and alter-
natives (VAR-ALTER); and the relative proportion of alterna-
tive-
and attribute-based processing (PATTERN). These
variables were selected because they should provide sensitive
indices of the early processing pattern. We have hypothesized
that subjects under severe time pressure should try to do a
quick evaluation of as many alternatives as possible on a
limited number of attributes. This pattern should not char-
acterize subjects with 25-s time pressure if our hypothesis that
strategy change occurs only under severe time pressure is
correct. That implies more attribute-based processing and
greater variation in processing across attributes for the 15-s
condition. Because only the first few acquisitions are exam-
ined, this should also imply less variation across alternatives
for
the 15-s
condition, because subjects
will
not have had time
to eliminate alternatives. Rather, they may be doing an initial
screening, with the values of
all
alternatives examined for the
attribute or attributes considered. Thus, the strategy change
hypothesis predicts the foregoing differences between the 15-
s and 25-s conditions for the eight initial acquisitions per time
pressure trial, whereas a strict truncation hypothesis should
predict no differences.
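A minimal sketch of this comparison appears below, assuming the same hypothetical (row, column, seconds) acquisition-log format sketched earlier; the helper names and the eight-acquisition cutoff parameter are illustrative, not the authors' code.

```python
# Illustrative sketch of the truncation test: recompute the selectivity and pattern
# measures on only the first eight acquisitions of each time-pressure trial, then
# compare the 15-s and 25-s groups. Log format and function names are assumptions.
from statistics import pvariance, mean

def early_measures(acquisitions, k=8):
    early = acquisitions[:k]                        # (row, col, seconds) tuples
    total = sum(t for _, _, t in early) or 1.0
    att = [sum(t for _, c, t in early if c == o) / total for o in range(4)]
    alt = [sum(t for r, _, t in early if r == a) / total for a in range(1, 5)]
    cells = [(r, c) for r, c, _ in early if r > 0]
    t1 = sum(r1 == r2 and c1 != c2 for (r1, c1), (r2, c2) in zip(cells, cells[1:]))
    t2 = sum(c1 == c2 and r1 != r2 for (r1, c1), (r2, c2) in zip(cells, cells[1:]))
    pattern = (t1 - t2) / (t1 + t2) if (t1 + t2) else 0.0
    return pvariance(att), pvariance(alt), pattern    # VAR-ATTRIB, VAR-ALTER, PATTERN

def group_means(trials):                              # trials: list of acquisition logs
    var_att, var_alt, pat = zip(*(early_measures(t) for t in trials))
    return mean(var_att), mean(var_alt), mean(pat)

# Truncation predicts no 15-s vs. 25-s difference on these early-trial means;
# strategy change predicts higher VAR-ATTRIB, lower VAR-ALTER, and lower PATTERN at 15 s.
```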
The results support the strategy change hypothesis and are
not consistent with the truncation hypothesis: VAR-ATTRIB is marginally greater for the 15-s condition than the 25-s condition (M = .033 vs. M = .026), F(1, 526) = 2.84, p < .10, and VAR-ALTER is significantly less for the 15-s condition (M = .075 vs. M = .105), F(1, 510) = 7.03, p < .02. The tendency
for the 15-s condition to engage in more attribute-based
processing (more negative values of PATTERN) is also margin-
ally significant (M = -.33 vs. M = .01), F(l, 474) = 3.35, p
< .08. Although these results do not all reach conventional
.05 levels of significance, they are all directionally consistent
with the strategy change hypothesis rather than truncation. In
addition, there were no significant differences on these vari-
ables for the no-time-pressure trials between the 15-s and 25-
s conditions,
F(1, 526) = 1.31, F(1, 473) = .18, and F(1, 382) = .59 for VAR-ATTRIB, VAR-ALTER, and PATTERN, respectively.
This supports the notion that different strategies are adaptive
responses to different levels of time pressure, not a general
tendency of the different subject groups.
To provide further insights into the pattern of processing
over the course of a decision, the responses of subjects were
compared for the first eight acquisitions and last eight acqui-
sitions of
all
trials on PATTERN and PTPROB. Consistent with
prior research (Bettman & Park, 1980), there was more attrib-
ute-based processing for the earlier acquisitions than for the
later acquisitions, both for the 15-s condition (M = -.35 vs.
-.05),
F(l, 1137) = 46.86, p < .001, and the 25-s condition
(M = -.08 vs. .26), F(\, 873) = 51.94, p < .001. There was
also a greater initial focus on probabilities for both the 15-s
(M= .57 vs. M =.11), F(l, 1264)= 1837.04,/? < .001, and
25-s conditions (M = .58 vs. M = .08), F(l, 944) = 2576.23,
p < .001. The PTPROB means were essentially the same for
both the 15-s and 25-s conditions. Thus, the 15-s and 25-s
conditions exhibit different responses to time pressure from
the beginning. Even though processing becomes relatively
more alternative based over the course of a trial, the difference
between the two conditions remains.
General Discussion
Previous research has shown that the same individual will
often use diverse strategies to make a decision, contingent on
task demands (Payne, 1982). A major problem for current
cognitive research is to be able to better understand and
predict when a particular strategy will be used.
This article has examined effort and accuracy considera-
tions in the selection of strategies for making a choice. The
general hypothesis is that selection among strategies is adap-
tive,
in that a decision maker will choose strategies that are
relatively efficient in terms of effort and accuracy as task and
context demands are varied. The article first outlined an
approach to modeling the impact of task and context variables
on decision strategies by using elementary information proc-
esses to measure effort and computer simulation models to
examine accuracy and effort trade-offs. A Monte-Carlo sim-
ulation examined the impact of variation in the presence or
absence of time pressure, dispersion in probabilities, presence
or absence of dominated alternatives, and different problem
sizes on the accuracy and effort of a variety of choice heuris-
tics.
Strategies were identified that approximate the accuracy
of normative procedures while requiring substantially less
effort. However, no single heuristic did well across all task
and context conditions.
A
decision maker striving to maintain
a high level of accuracy with a minimum of effort would have
to use a variety of heuristics adaptively. Of particular interest
was the finding that under time constraints, several attribute-
based heuristics (e.g.,
EBA
and
LEX)
were more accurate than
a normative procedure such as expected value maximization,
because that procedure had to be truncated when it ran out
of time.
The simulation does not really answer the question of
how
a strategy is selected, however. The implicit viewpoint in our
work is that a decision maker possesses a repertoire of well-
defined strategies and selects among them when faced with a
decision by considering the expected costs and expected ben-
efits of each strategy. This top-down view of strategy selection
is consistent with previous models like that of Beach and
Mitchell (1978). Alternatively, strategies may develop during
the course of solving a decision problem in a more bottom-
up,
constructive, and ad hoc fashion (Bettman, 1979).
Throughout a choice episode a decision maker will be alert
to structure in the choice set that can be exploited to reduce
effort and perhaps increase accuracy. Regardless of
how
strat-
egy selection is controlled, the simulation results do suggest
how certain context and task variables affect the relative effort
and accuracy of possible strategies.
Adaptivity to
Decision
Environments
Experiments 1 and 2 tested the degree of correspondence
between the efficient processing strategies for a given decision
problem identified by the simulations and the actual infor-
mation processing behavior exhibited by people. The results
for actual decision behavior tended to validate the patterns
predicted by the simulation.
More specifically, subjects generally acquired less informa-
tion, spent less time per acquisition, spent proportionately
more time on the most important attribute, displayed greater
variance in the proportion of time spent on the various
alternatives and attributes, and used more attribute-based
processing when dispersion in the weights (probabilities), a
context variable, was high rather than low. The effects of
dispersion on the proportion of time spent on probabilities
were not consistent across studies, although more time was
spent on probabilities under high dispersion in the majority
of cases. Such adaptivity in strategy usage in response to a
context variable demonstrates that people are sensitive to a
change in the task environment that potentially impacts the
relative accuracy of heuristics as well as affecting relative
effort.
In addition, several effects of time pressure were demon-
strated. Under moderate time pressure, subjects were shown
to accelerate their processing. There was some evidence, al-
though weaker, that subjects selectively focus on a subset of
the available information. Under severe time pressure, people
accelerated their processing, focused on a subset of the infor-
mation, and changed their information processing strategies.
There was more attribute-based processing and more variance
in the proportion of time spent on various attributes as time
pressure increased. Counter to predictions, there were not
systematic effects of time pressure on the variance in the
proportion of time spent on various alternatives, implying
that subjects were equally consistent in the proportion of
information searched across alternatives, regardless of time
pressure. One possible explanation is that the nature of the
display may have made complete scans
of
an attribute rela-
tively easy.
Across the two experiments, the relative amount of dimen-
sional processing (PATTERN) was
an
average
of
41%
greater
under time pressure
of
15
s
compared with no time pressure.
In contrast,
the
relative amount
of
dimensional processing
was 20% less under time pressure
of
25
s
compared with
no
time pressure.
The
variance
in
processing across attributes
(VAR-ATTRIB)
was increased by 40% on average for the 15-s
time pressure conditions versus
no
time pressure; however,
the average increase
was
21%
for the 25-s
time-pressure
conditions.
There
are
several important aspects
of
these time-pressure
results. First, they provide
a
strong demonstration
of the
adaptivity
of
processing strategies
to
time pressure. Second,
the results
of the
experiments imply that there
may be a
hierarchy
of
responses
to
time pressure. People
may
first
attempt
to
simply accelerate their processing
and try to do
the same things faster.
If
the time pressure
is too
great
for
acceleration
to
suffice, individuals may next engage
in
filtra-
tion, focusing
on a
subset
of
the available information.
Fi-
nally, people may change strategies when time pressures be-
come extreme. Of
course,
the specific strategies found
in our
studies may
be a
function
of
the problem format used.
Dif-
ferent adaptive strategies would presumably
be
found
for
different task structures.
Learning Effort-Accuracy Trade-offs
The evidence for processing changes reported in the present
experiments suggests that people were learning
to
adapt their
behavior
to
changes
in
task
and
context.
Yet
none
of the
experiments provided
the
subjects with explicit accuracy
or
outcome feedback. Johnson
and
Payne (1985) argued that
a
decision maker has access
to a
fairly rich data base about
the
course of his
or
her own decision processes. They hypothesize
that this process feedback could provide
the
information
necessary
for
strategy change. For example,
a
decision maker
might induce
the
LEX
rule
by
first noticing that certain
out-
comes seem much more probable than others (Klein,
1983,
reports data supporting this kind
of
learning about
the
task).
Next, the decision maker might evaluate
a
strategy that takes
advantage of
the
features of
the
task by checking whether the
outcomes
are
consistent with several simple principles
of
choice. For instance, the decision maker might check that the
new strategy does
not
select dominated alternatives, and that
it selects alternatives that have satisfactory levels
of
other
outcomes.
General knowledge
of
what makes
for a
good decision
process may also play a role in learning to adapt. For instance,
the idea that
a
good decision requires considering all relevant
information
is
likely
to be
held
by
many people,
as is the
notion that
a
good process will examine
the
most important
information.4 Consequently, when faced with
a
decision task
in which
it is
impossible
or
very difficult
to
process
all
information,
the
decision maker might
use the
information
he or she has gained about the task to decide what information
is
less
important and can be ignored. An example of such task
information is that some probabilities are much smaller than
others under high-dispersion conditions.
The data presented
in
this article demonstrate that people
shift decision strategies
in
response
to a
context change
in
ways that maintain accuracy, without explicit outcome feed-
back. Reder (1987)
has
also found evidence
of
strategy
changes
in a
question answering task without outcome feed-
back. She suggests several ideas regarding the mechanisms
of
adaptive strategy selection, such
as a
"feeling-of-knowing"
process, that may relate to the "level of confidence" discussed
by Busemeyer (1985). People may seek
to
develop strategies
that take advantage
of
problem structure
so as to
minimize
effort while maintaining
a
feeling
of
knowing
or
desired level
of confidence that they are making
a
reasonable decision.
The present results, taken
as a
whole, provide strong
evi-
dence
for
adaptivity
in
decision making, although the degree
of adaptivity was
not
perfect. There
did
appear
to be
some
carry-over effects
in
terms
of
processing strategies from trial
to trial and from day
to
day. Despite these carry-over effects,
however, individuals did change information processing strat-
egies depending upon
the
changing structure
of the
choice
environment from problem
to
problem. This variability
in
processing from one problem to the next implies that humans
possess abilities
for
assessing choice environment properties;
characterizing such abilities would be
a
fruitful area for study.
It is likely that certain environmental properties may be more
easily noticed,
and
hence more adapted to, than others.
For
example,
it may be
difficult
for
people
to
notice attribute
intercorrelations (Crocker, 1981).
Finally, the evidence for adaptive use of heuristics obtained
in this study suggests
a
picture
of
the human decision maker
that
is
fairly optimistic
in
terms
of
rational behavior. People
clearly
do use
choice heuristics that lead
to
violations
of
certain principles
of
rationality (Tversky, 1969).
The use of
heuristic processes that lead
to
decision errors may reflect
a
trade-off
of
effort
and
accuracy,
or
reflect
the
fact that
the
decision maker has no other choice in some decision environ-
ments than
the use of a
heuristic (Simon, 1981). However,
our results suggest that people can adaptively change process-
ing strategies
in
ways that
are
appropriate given somewhat
subtle changes
in the
structure
of
the decision problems they
face.
4 As part of the debriefing process
for
Experiment 2, subjects were
asked what strategy they would advocate to identify the "best" choice
under
no
time pressure.
The use of all
information, including
a
weighting
of
payoffs
by
probabilities, was identified
by
many
of
the
subjects. For time pressure, the subjects indicated that use of as much
of the most important information
as
possible was
a
major consid-
eration.
References
Abelson,
R. P., &
Levi,
A.
(1985). Decision making
and
decision
theory.
In
G. Lindzey & E. Aronson
(Eds.),
The
handbook
of social
psychology
(Vol.
1, pp. 231-309). New York: Random House.
Anzai, Y., & Simon,
H.
A. (1979). The theory
of
learning by doing.
Psychological
Review,
86, 124-140.
Beach,
L. R.
(1983). Muddling through:
A
response
to
Yates
and
552J. PAYNE, J. BETTMAN, AND E. JOHNSON
Goldstein.
Organizational Behavior
and Human
Performance,
31,
47-53.
Beach, L. R., & Mitchell, T. R. (1978). A contingency model for the
selection of decision strategies. Academy of Management Review,
3,
439-449.
Ben Zur, H., & Breznitz, S. J. (1981). The effects of time pressure on
risky choice behavior. Ada
Psychologica,
47, 89-104.
Bettman, J. R. (1979). An
information processing theory
of consumer
choice.
Reading, MA: Addison-Wesley.
Bettman, J. R., Johnson, E. J., & Payne, J. W. (1987).
Cognitive effort
and
decision
making
strategies:
A
componential analysis
of choice.
Unpublished manuscript, Center for Decision Studies, Fuqua
School of
Business,
Duke University.
Bettman, J. R., & Park, C. W. (1980). Effects of prior knowledge and
experience and phase of the choice process on consumer decision
processes: A protocol analysis. Journal of
Consumer
Research,
7,
234-249.
Busemeyer, J. R. (1985). Decision making under uncertainty: A
comparison of simple scalability, fixed-sample, and sequential-
sampling models. Journal of Experimental
Psychology:
Learning,
Memory, and
Cognition,
11, 538-564.
Card, S. K., Moran, T. P., & Newell, A. (1983). The
psychology
of
human-computer
interaction.
Hillsdale, NJ: Erlbaum.
Crocker, J. (1981). Judgment of covariation by social perceivers.
Psychological
Bulletin,
90, 272-292.
Einhorn, H. (1980). Learning from experience and suboptimal rules
in decision making. In T. S. Wallsten (Ed.),
Cognitive processes
in
choice
and
decision behavior
(pp.
1-20).
Hillsdale, NJ: Erlbaum.
Hammond, K. R. (1986). A
theoretically
based
review
of theory and
research in judgment and decision
making.
(Report No. 260).
Boulder, CO: Center for Research on Judgment and Policy, Insti-
tute of Cognitive Science, University of Colorado.
Huber, J., Payne, J. W., & Puto, C. (1982). Adding asymmetrically
dominated alternatives: Violations of regularity and the similarity
hypothesis. Journal of
Consumer
Research,
9, 9-98.
Huber, O. (1980). The influence of some task variables on cognitive
operations in an information-processing decision model. Acta
Psy-
chologica,
45, 187-196.
Johnson, E. J. (1979). Deciding how to
decide:
The
effort
of making
a
decision.
Unpublished manuscript, University of Chicago.
Johnson, E. J., & Payne, J. W. (1985). Effort and accuracy in choice.
Management
Science,
31, 395-414.
Johnson, E. J., Payne, J. W.,
&
Bettman, J. R. (in press). Information
displays and preference reversals. Organizational Behavior and
Human
Decision
Processes.
Johnson, E. J., Payne, J. W., Schkade, D. A.,
&
Bettman, J. R. (1986).
Monitoring information processing and
decisions:
The mouselab
system.
Unpublished manuscript, Center for Decision Studies,
Fuqua School of
Business,
Duke University.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.
Klayman, J. (1983). Analysis of predecisional information search patterns. In P. C. Humphreys, O. Svenson, & A. Vari (Eds.), Analyzing and aiding decision processes (pp. 401-414). Amsterdam: North Holland.
Klein, N. M. (1983). Utility and decision strategies: A second look at the rational decision maker. Organizational Behavior and Human Performance, 31, 1-25.
March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. Bell Journal of Economics, 9, 587-608.
McClelland, G. H. (1978). Equal versus differential weighting for multiattribute decisions. Unpublished manuscript, University of Colorado, Boulder.
Miller, J. G. (1960). Information input overload and psychopathology. American Journal of Psychiatry, 116, 695-704.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice Hall.
Payne, J. W. (1976). Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 16, 366-387.
Payne, J. W. (1982). Contingent decision behavior. Psychological Bulletin, 92, 382-402.
Reder, L. M. (1987). Strategy selection in question answering. Cognitive Psychology, 19, 90-138.
Russo, J. E., & Dosher, B. A. (1983). Strategies for multiattribute binary choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 676-696.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.
Simon, H. A. (1981). The sciences of the artificial (2nd ed.). Cambridge, MA: MIT Press.
Thorngate, W. (1980). Efficient decision heuristics. Behavioral Science, 25, 219-225.
Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76, 31-48.
Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281-299.
Wright, P. L. (1974). The harassed decision maker: Time pressures, distraction, and the use of evidence. Journal of Applied Psychology, 59, 555-561.
Zakay, D., & Wooler, S. (1984). Time pressure, training and decision effectiveness. Ergonomics, 27, 273-284.
Received June 16, 1986
Revision received September 14, 1987
Accepted September 14, 1987