Receiving other people's advice: Influence and benefit

Ilan Yaniv
Department of Psychology, Hebrew University of Jerusalem, Mt. Scopus, Jerusalem 91905, Israel
E-mail address: ilan.yaniv@huji.ac.il

This research was supported by Grant No. 822/00 from the Israel Science Foundation. The author is a member of the Department of Psychology and of the Center for the Study of Rationality, Hebrew University of Jerusalem.
Abstract

Seeking advice is a basic practice in making real-life decisions. Until recently, however, little attention has been given to it in either empirical studies or theories of decision making. The studies reported here investigate the influence of advice on judgment and the consequences of advice use for judgment accuracy. Respondents were asked to provide final judgments on the basis of their initial opinions and advice presented to them. The respondents' weighting policies were inferred. Analyses of these policies show that (a) the respondents tended to place a higher weight on their own opinion than on the advisor's opinion (the self/other effect); (b) more knowledgeable individuals discounted the advice more; (c) the weight of advice decreased as its distance from the initial opinion increased; and (d) the use of advice improved accuracy significantly, though not optimally. A theoretical framework is introduced which draws in part on insights from the study of attitude change to explain the influence of advice. Finally, the usefulness of advice for improving judgment accuracy is considered.

© 2003 Elsevier Inc. All rights reserved.
We are usually convinced more easily by reasons we have found
ourselves than by those which have occurred to others. – Blaise
Pascal
The use of advice is a fundamental practice in making real-life decisions, whether as basic as finding directions in an unfamiliar environment or as complex as those involving legal or medical issues. However, until recently the use of advice has been given little consideration in either empirical studies or theories of decision making (Harvey & Fischer, 1997; Jonas & Frey, 2003; Jungermann, 1997; Sniezek & Buckley, 1995; Yaniv & Kleinberger, 2000). Advice seeking is important because real decision problems generally do not come as completely packaged, self-contained "textbook problems." Hence people engage in interactive social and cognitive processes of giving and taking advice to enhance their representation of a decision problem (Yates, Price, Lee, & Ramirez, 1996; Zarnoth & Sniezek, 1997). In particular, they solicit opinions from worthy advisors, assess their merit, and then combine them. An advisor might fill in missing information, help assess the values of alternative options, or serve as a "sounding board." In sum, it appears that the use of advice plays a far greater role in the practice of real-life decision making than it has had in decision research.
A major motivation for seeking advice is the need to improve judgment accuracy and the expectation that advice will help. An abundance of studies has shown that combining multiple sources of information improves estimation in the long run, in a variety of domains ranging from perceptual judgment to business forecasting (e.g., Armstrong, 2001; Sorkin, Hayes, & West, 2001; Yaniv, 1997). Aside from accuracy, there are also social reasons for seeking advice, which we consider only briefly here. Accountants performing complex audit tasks tend to solicit advice for self-presentational reasons and to increase the justification for their decisions (Kennedy, Kleinmuntz, & Peecher, 1997). Indeed, seeking advice implies sharing with others the responsibility for the outcome of a decision (Harvey & Fischer, 1997). One might argue, however, that even self-presentational reasons for seeking advice are rooted in the belief on the part of the individual or the organization that consulting someone else's opinion could improve one's final decision.
Whereas advising per se has received little attention in the study of decision making, several important lines of research form the basis for the present investigation. These include theories in the following domains: (a) processes of attitude change, belief revision, and perseverance (Zimbardo & Leippe, 1991); (b) the literature on combining expert opinions and linear models of judgment (Armstrong, 2001; Blattberg & Hoch, 1990); (c) models of information integration (Anderson, 1968); and (d) interactive group judgment (Davis et al., 1997; Sniezek & Henry, 1989). Research in these areas highlights the processes by which information is combined and opinions are revised.
The focus of the present research is on two aspects of advice seeking: how the advice is used and whether there is a resulting gain in accuracy. In these studies we consider perhaps the simplest form of advice use, namely getting a piece of information (a numerical estimate) from an outside party and using it to update one's own view. As simple as it is, numerical advice has an important function in individual as well as organizational decisions. Physicians, weather forecasters, genetic consultants, and lawyers, just to name a few, are all in the business of communicating their forecasts and uncertain estimates to others facing decisions. In a different vein, the use of numerical estimates has certain methodological advantages, primarily the ability to measure straightforwardly respondents' weighting policies and accuracy gains.
Policies for using advice
A basic dilemma in using advice involves the amount of weight to place on others' opinions. Receiving advice often exposes decision makers to a potential conflict between their initial opinions and the advice. Consider a manager who believes that a certain new product is likely to gain success and is thus worthy of further development. The manager then receives a lukewarm expert opinion of her idea. How might she revise her opinion? The key question in many practical situations is to decide just how much weight ought to be assigned to a particular piece of advice. In particular, a decision maker's weighting policy might entail completely ignoring the other opinion, some adjustment of one's own opinion towards the other, or complete adoption of the other opinion.

The studies presented here investigate how people weight others' opinions and how this weighting policy changes as a function of knowledge and of the distance of the advice from the decision maker's own opinion. Finally, the consequences of such policies for judgment accuracy are considered.
In order to develop hypotheses about the policies that decision makers use for integrating advice, I made use of an analogy between advice use and attitude change. The process of weighting advice in judgment may resemble the processes underlying opinion change as a function of communication. To be sure, research in these two areas arises from different conceptual perspectives. Studies of judgment typically ask how good a person's judgment is in terms of its accuracy or coherence. Studies of attitudes are typically focused on the valence (e.g., positive vs negative) and strength of the person's attitude, with the goal of understanding what affects them (Ajzen, 2001). Moreover, in attitude change the main perspective is that of the communicator, who seeks to influence or persuade target recipients (Zimbardo & Leippe, 1991). In advice seeking, the recipient often initiates the process in an attempt to improve the quality of her judgment. The goal of influence promotion is manipulative – that is, bringing about change in some preferred direction – whereas a major goal in seeking advice is improving decision quality.

Despite these differences, it is not inconceivable that advice use and attitude change share certain commonalities. In both cases one's initial opinion is integrated with that of someone else, be it a communicator's influential message or an advisor's opinion. I pursue the merits (as well as the limits) of this analogy in subsequent sections and the final discussion. Drawing on this analogy, I outline two hypotheses which involve the mechanisms that underlie discounting and the effect of distance. Both reflect the manner in which judges resolve the conflict between their initial opinions and the advice. I also consider the consequences of advice use for accuracy.
The self/other effect: Discounting the weight of advice
Previous work on the use of advice in decision making suggests a self/other effect whereby individuals tend to discount advice and favor their own opinion. In a judgmental estimation task (Yaniv & Kleinberger, 2000), respondents formed a final opinion on the basis of their initial opinion and a piece of advice. Rather than using equal weighting, respondents tended to place a higher weight on their own opinion than on the advisor's opinion. Even though the decision makers were sensitive to the quality of the advice (good vs poor), they tended to discount both good and poor advice. In a cue-learning study by Harvey and Fischer (1997), respondents shifted their estimates about 20–30% towards the advisor's estimates. Lim and O'Connor (1995) found that, in combining their prior personal forecasts and advisory (statistical) forecasts, judges weighted their own forecasts more heavily than the statistical forecasts.

I suggest that these discounting phenomena result from the nature of the support the judge can recruit for her own opinion versus the advice. In particular, the self/other effect may arise from an informational asymmetry inherent in any decision-making process that involves the use of advice. Individuals are privy to their own thoughts, but not to the thoughts underlying the advisor's opinion. A judge can access pieces of evidence supporting his/her own opinion more easily than ones supporting the advisor's view. If the weighting of opinions is a function of the accessible evidence, then, other things being equal, judges should be expected to discount advice.

A related hypothesis is that the weight of advice is a function of the judge's initial knowledge or competence. The more knowledgeable individuals are, the more evidence they retrieve from memory for their own opinion and, therefore, the higher the weight they place on their own opinion.
Distance effects
How does discounting depend on the distance of the advice from one's own opinion? To develop the relevant hypotheses I used the aforementioned analogy between studies of advice use and of attitude change. In both situations individuals integrate their own prior opinion with that of another person. Research on the effects of influential messages on attitude change can inform us about how advice distance affects the way messages are weighted. Consider a practical advice-using situation in which your initial guess is that the distance between two places is roughly 10 miles. Then advisor A tells you she thinks the actual distance is 15 miles, while advisor B tells you he thinks the distance is 80 miles. The "near" advice might lead you to revise the initial estimate ("She says the place is somewhat further than I had initially thought"). The "far" advice, however, seems to call for a total reconsideration of the appropriate weighting strategy ("His opinion is too far from mine – either his or my estimate must be mistaken").

A basic tenet of social-cognitive psychology embedded in all consistency theories is that individuals seek to resolve discrepancies that exist among their beliefs. Theories of attitude change, such as dissonance (Aronson, Turner, & Carlsmith, 1963) and social judgment (Sherif & Hovland, 1961), predict that attitude change should decline with distance. Suppose attitude change is measured as a proportion – the amount of change is expressed as a fraction of the distance between the initial attitude and the message. Bochner and Insko (1966) presented a persuasive message advocating that people get N hours of sleep per night (where N ranged in various conditions from 8 to 0 hours). The respondents' initial views (in an independent sample) averaged around 7 or 8 hours per night. Then, as the advocated number of hours of sleep decreased – namely, as the discrepancy increased – the magnitude of attitude change decreased. As the message becomes more extreme, people begin to generate counterarguments or disparage the source.

A related phenomenon was seen in studies of stereotype change (Kunda & Oleson, 1997), and conceptualized in terms of assimilation and contrast processes (Sherif & Hovland, 1961). While a slightly deviant opinion could be assimilated and thus cause a shift in one's attitude, an extremely discrepant one has a proportionally reduced effect, since it falls outside the person's "latitude of acceptance" (Sherif & Hovland, 1961) and stands out in stark contrast to one's initial opinion. The notion that social influence declines with distance has been incorporated in Davis et al.'s (1997) social judgment scheme. This model describes how the opinions of groups (e.g., committees, juries) are aggregated during discussion to establish the group's consensual judgment. An element of the model is the idea that a discrepant opinion's impact on the group decision quickly declines as the discrepancy increases. In sum, the prediction based on attitude-change studies is that distant advice will be weighted less than near advice.
Using advice to improve accuracy
A major motivation for seeking advice is the expectation of improving judgment accuracy.1 Numerous studies have indeed shown that combining multiple estimates tends to improve predictions (e.g., Armstrong, 2001; Ashton & Ashton, 1985; Libby & Blashfield, 1978; Sniezek & Buckley, 1995; Sniezek & Henry, 1989; Sorkin et al., 2001; Winkler & Poses, 1993; Yaniv, 1997; Yaniv & Hogarth, 1993; Zarnowitz, 1984).

A number of formal models provide a theoretical basis for understanding when and how combining estimates improves accuracy, where accuracy is measured in terms of mean absolute error or judgment–criterion correlation. These include models based on the Condorcet jury theorem (majority rules/binary issues) and group signal-detection theory (Sorkin et al., 2001), models for combining subjective probabilities from multiple judges (Budescu & Rantilla, 2000; Wallsten, Budescu, Erev, & Diederich, 1997), and models for combining point forecasts (Clemen, 1989; Hogarth, 1978). In the case of quantitative judgments, a brief outline can show how and why improvement is to be expected from the use of advice. According to the Thurstonian view, a subjective forecast about an objective event is the sum of three components: the "truth," a constant bias, and random error. Statistical principles guarantee that forecasts formed by averaging several sources have lower variability than the individual opinions. The combined forecasts are expected to converge about the truth if the bias is zero or fairly small (e.g., Einhorn, Hogarth, & Klempner, 1977). In the present study we also investigate the effect of the respondents' weighting policies on the accuracy of their final judgments.
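To make the statistical argument concrete, the following minimal simulation is offered as my own illustration (it is not part of the original article; the truth, bias, and noise parameters are arbitrary assumptions). It shows that the mean absolute error of an average of n independent Thurstonian estimates shrinks as n grows, converging on the constant bias.

import random

random.seed(1)
TRUTH = 1869        # illustrative criterion value (e.g., a historical date)
BIAS = 5            # shared constant bias, assumed small
NOISE_SD = 40       # standard deviation of each judge's random error

def estimate():
    # Thurstonian view: estimate = truth + constant bias + random error
    return TRUTH + BIAS + random.gauss(0, NOISE_SD)

def mean_abs_error(n_judges, trials=20000):
    total = 0.0
    for _ in range(trials):
        combined = sum(estimate() for _ in range(n_judges)) / n_judges
        total += abs(combined - TRUTH)
    return total / trials

for n in (1, 2, 5, 10):
    print(n, round(mean_abs_error(n), 1))
# The random-error component of the average has variance NOISE_SD**2 / n,
# so the combined estimate converges about truth + bias as n increases.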
1 Here the analogy between advice use and attitude change breaks down, since objective accuracy is not an issue in the study of attitudes.
Overview
In our studies we presented respondents with questions that had real consequences for them as decision makers, since they received a bonus for making accurate judgments. The respondents were given advice, and the principal measure was the weight placed on that advice in their final decisions. The studies, which were conducted on a computer due to their interactive nature, shared the following general procedure. In the first phase, respondents were presented with questions and asked to state their estimates. In the second phase, they were presented with the same questions along with estimates made by various advisors (other students). The respondents were then asked to provide their estimates once again. They were free to use the advice as they wished. In Study 1 the advice was drawn at random from a pool of advice. In Studies 2 and 3 the advice was presented at one of three distance levels (near, intermediate, or far). Thus, the advice had to be "custom-made" online by the computer specifically for each respondent, depending on his or her initial opinions in the first phase.

Two important notes are in order. First, in all the studies we paid a bonus for each final estimate with a lower than average error, so it was in the respondents' interest to consider carefully and make the best use of the advice in whatever manner they deemed appropriate. Second, a major advantage of the present experimental method (Studies 1–2) is the use of ecologically valid advice, that is, advice sampled from pools (distributions) of actual estimates made by other individuals. In the third study the advice was generated mechanically as a simple transformation of the respondents' initial opinions. This method allowed us a certain control that could not be obtained in Study 2, at the expense of the ecological structure preserved in the first two studies. In sum, two alternative operational definitions of advice distance were tested. We compared the weighting policies, distance effects, and accuracy gains obtained using either the ecological or the mechanical advice.
Study 1: Weighting advice as a function of knowledge
The goal of the first study was to replicate and extend the discounting phenomenon and, in particular, to test whether advice discounting varies as a function of the judge's knowledge. Such a finding would provide further support for our hypothesis. If discounting depends on evidence retrieval, then those who are more knowledgeable should place less weight on the advice than those who are less knowledgeable. Studies 2 and 3 further tested the interaction between advice distance and knowledge.
Method
The first study investigated how people use advice from a randomly drawn advisor in an ecological pool. The experimental procedure was conducted individually on personal computers. Fifteen questions about the dates of historical events (within the last 300 years) were presented sequentially on the computer display screen. As shown in Table 1, in the first phase respondents were shown one question at a time and asked to type in their best estimate for each one via the computer keyboard; in addition, they were asked to give lower and upper boundaries such that the true answer would be included between the limits with a probability of .95.
After the first phase was over, the respondents were told that there would be a second phase in which they would be presented with the same set of questions again. Now, however, each question would be presented along with two estimates: the respondent's own initial estimate and that of an advisor. The respondents would then be asked to give a second, possibly revised, estimate for the question. No online feedback was given on the accuracy of their own or the advisors' opinions (in particular, the correct answers were never shown). The respondents were told they would get a bonus at the end of the study, depending on their overall accuracy (see below).

The advisor's estimate was randomly drawn by the computer from a pool of 50 estimates collected in an earlier study in which respondents were instructed merely to provide the best estimate for each question. The advisor varied from one question to the next, with labels such as A, D, and J used to indicate that each estimate came from a different individual. By sampling estimates from pools of data, adequate ecological validity could be maintained. The dispersion of the estimates and their errors corresponded to those that might have been encountered in reality by our respondents when seeking answers to such questions among their peers – undergraduate social science students.
The respondents (N = 30) were undergraduate students who participated either as part of their course requirements or for a flat fee of 12 Israeli shekels. They were all told that they would receive a bonus based on the accuracy of their estimates. In particular, they would receive 1 Israeli shekel ($0.30 at the time of the study) as a bonus for each estimate that had a better than average accuracy score. Altogether they could collect up to 15 shekels in bonus payment. Thus it was in their interest to consider carefully and make the best use of the estimates given to them. The bonus was based on the final estimates (i.e., the second phase).

Table 1
Sample question and outline of the general procedure

Phase 1 (series of 15 questions):
  In what year was the Suez Canal first opened for use?
  Your best estimate ____ (low estimate ____ high estimate ____)

Phase 2 (same 15 questions repeated):
  In what year was the Suez Canal first opened for use?
  Your previous best estimate was 1905
  The best estimate of advisor K was 1830
  Your final best estimate ____
Results
Advice weighting
The final estimate can be represented as a weighted combination of the two prior estimates – the initial estimate and the advice – with the weights being proportional to the extent of the shift towards (or away from) the advice. We define

weight of advice = |f − i| / |a − i|,

where i, f, and a stand for the initial estimate, the final estimate, and the advice, respectively; the weight of advice is well defined if the final estimate falls between the initial estimate and the advice, as it did in over 95% of the cases. The weight of advice, expressed as a proportion, reflects the weight that a respondent assigns the advice (and is inversely related to the extent to which the advice is discounted). Thus, the weight of advice takes a value of 0 if, in making the final estimate, the respondent adheres completely to his or her initial estimate (100% discounting of the advice); the weight of advice is 1.0 if the respondent shifts completely to the advice (0% discounting). Intermediate weights indicate that positive weights were assigned to both opinions (partial discounting).
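Stated as code, the measure is straightforward. The sketch below is my own illustration of the definition above, not the original analysis script; it returns None on the rare trials where the final estimate falls outside the interval spanned by the initial estimate and the advice, where the measure is not well defined.

def weight_of_advice(initial, advice, final):
    """Compute |final - initial| / |advice - initial|, or None if undefined."""
    if advice == initial:
        return None  # no discrepancy between the two opinions to resolve
    # Well defined only when the final estimate lies between the two opinions
    # (true in over 95% of the trials in Study 1).
    if not (min(initial, advice) <= final <= max(initial, advice)):
        return None
    return abs(final - initial) / abs(advice - initial)

# Example: the Table 1 estimates (initial 1905, advice 1830) with a
# hypothetical final estimate of 1885 give |1885 - 1905| / |1830 - 1905|
# = 20/75, i.e., roughly 0.27 (partial discounting).
print(weight_of_advice(1905, 1830, 1885))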
Whereas a weight of 0.50 for advice implies equal weighting, the actual mean weight of advice (0.27) was significantly lower, t(29) = 6.35, p < .01. Respondents placed a higher weight on their own opinion than on the advisor's opinion. This tendency was exhibited by most respondents: 28 of the 30 respondents had a mean weight of advice lower than 0.5. The respondents' means had an interquartile range from 0.19 to 0.47. Further analysis examined the distribution of all 450 individual trials (30 respondents × 15 questions). After rounding to the nearest decimal, the weights of advice were classified into three groups: low (0–.3), medium (.4–.6), and high (.7–1.0). The percentages falling in these groups were 58%, 20%, and 22%, respectively. These results support the conclusion that individuals tend to discount advice.
Weighting as a function of personal knowledge
Next we analyzed the weight of advice as a function of the respondents' own knowledge, measured in terms of their prior performance. The respondents were divided into two groups (median split) – high knowledge and low knowledge – according to their accuracy (a function of average absolute error) in Phase 1 of the study, that is, depending on whether their average error fell below or above the median. As Table 2 shows, the high-knowledge group discounted the advice significantly more than the low-knowledge group. The respective mean weights of advice were 0.20 vs 0.33, t(28) = 2.65, p < .05.
Improving accuracy
Exposure to the advice helped respondents improve their accuracy. The mean absolute error (in years) was reduced from 56.2 (for the initial estimate) to 44.8 for the combined estimate, F(1, 28) = 14.02, p < .01. Table 2 shows the accuracy gains for the two knowledge groups: 15% for the high-knowledge group (error reduced from 46.3 to 38.9 years), and 21% for the low-knowledge group (from 66.0 to 50.7). The low-knowledge group seemed to benefit more from the advice, but the interaction between knowledge group and type of error (initial vs final) was not significant, F(1, 28) = 1.68.
Discussion
A major conclusion regarding weighting policies in this study is that decision makers tend to discount advice. The respondents and the advisors were drawn from the same population, with similar background knowledge; on average, the respondents' accuracy was on a par with that of the advisors (mean absolute errors were 56.1 and 49.6 for respondents and advisors, respectively). Nevertheless, the respondents placed greater weight on their own judgments. They resolved the discrepancy between their own and the other opinion by adhering to their own opinion and making a token shift towards the other opinion.2

Table 2
Results from Study 1

Judge's knowledge   Weight of advice   Absolute error before   Absolute error after   % Improvement   Absolute error (weight + .17)*
High                0.20               46.3                    38.9                   15              37.8
Low                 0.33               66.0                    50.7                   21              48.1

* These are the mean absolute errors that would have been observed had respondents increased their actual weight of advice by 0.17 on every single trial.

2 The results are similar to those obtained in Study 1 in Yaniv and Kleinberger (2000). The main difference between the two studies is that no feedback was given online in the present study (as noted in the Method section), whereas in the previous study feedback – the correct answer – was given after each trial in the second phase, allowing respondents to track the accuracy of the advice and of their own estimates.
These results suggest two opposite perspectives on insight. On the one hand, the weights on advice were too low, suggesting that respondents' evaluations of their own knowledge were exaggerated overall. Indeed, people reveal poor insight in over-estimating the chances that their knowledge is correct (calibration curves reveal overconfidence; e.g., Lichtenstein & Fischhoff, 1977). On the other hand, respondents did not discount advice indiscriminately – those who knew less (in the first phase) placed higher weight on the advice than those who knew more. Such realism is also found in studies of probabilistic confidence judgment, where calibration curves are often found to be monotonically increasing, thereby indicating that easy items are assigned higher confidence levels than hard ones. Such findings indicate that self-assessment is not a unidimensional concept.

The advice-discounting hypothesis can explain both aspects of the present results. First, there is an asymmetry in access to the evidence underlying each opinion, such that respondents are privy to their own thoughts but not to those of the advisor, and therefore weight their own opinions more heavily. Second, those who know less presumably retrieve fewer pieces of evidence to support their estimate, so they tend to place higher weight on the advice (compared with those who know more).
To what extent was advice underweighted? To get a handle on this, we use as a first approximation the deviation of the average weight of advice (0.27) from 0.50 – a difference of 0.23. Another rough approximation of the amount of underweighting can be obtained empirically by calculating the "optimal" weight of advice on a trial-by-trial basis. The optimal weights were calculated assuming that the true answer for each question was known (hence a best weight of advice could be derived).3 The average optimal empirical weight of advice was 0.44, compared with the actual weight of 0.27, so the difference between them was 0.17.

3 The formula for deriving the weight of advice in estimating the true answer was similar to the one used in Study 1.
To what extent might accuracy be improved if respondents increased the weight of advice? There are various ways to assess that potential improvement. The following calculation is given as an illustration. We calculated the final estimates that would have been obtained if the respondents had increased the weight assigned to the advice on each particular trial by 0.17 (the difference between the actual and the optimal weights). As Table 2 shows, the new final estimates were slightly more accurate than the actual final estimates (a 3–6% gain, not significant, t(29) = 1.66, p = .107). Most of the gain in accuracy is already achieved by the respondents' actual final estimates. It seems that merely considering an additional opinion is the key to achieving greater accuracy, while its exact weighting is less critical. Studies of combining forecasts suggest that accuracy (or fit) is highly robust to deviations of the weights from the optimum (Blattberg & Hoch, 1990).
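The article does not spell out the optimal-weight formula (Footnote 3 notes only that it resembles the Study 1 measure), so the following is a sketch under my own reading: with final = initial + w·(advice − initial), the absolute error |final − truth| is minimized over w in [0, 1], and the counterfactual reweighting adds 0.17 to the actual weight on each trial.

def optimal_weight(initial, advice, truth):
    """Weight w in [0, 1] minimizing |initial + w*(advice - initial) - truth|."""
    if advice == initial:
        return 0.0
    w = (truth - initial) / (advice - initial)
    return max(0.0, min(1.0, w))  # weights outside [0, 1] are clipped

def reweighted_final(initial, advice, actual_w, bump=0.17):
    """Counterfactual final estimate had the weight of advice been raised by bump."""
    return initial + min(1.0, actual_w + bump) * (advice - initial)

# Example: initial 1905, advice 1830, truth 1869 gives
# optimal w = (1869 - 1905) / (1830 - 1905) = 0.48.
print(optimal_weight(1905, 1830, 1869))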
Study 1 sets the stage for Studies 2 and 3, in which we systematically varied the distance of the advice from the decision maker's initial opinion. We asked whether and how respondents' weighting policy varies as a function of advice distance.
Study 2: Weighting ecological advice as a function of distance

We investigated how the distance of the advice from one's own opinion affects the weight it receives. The advice was at one of three distance categories: near, intermediate, or far. Each respondent experienced all three distance conditions, with one-third of the trials in each. The advisory estimates were designed online specifically for each respondent, depending on the estimates he or she gave in the first phase. For each question, the computer accessed a pool of estimates produced in previous studies and selected advice from it. This procedure guaranteed that estimates were selected from within the empirical distribution, and thus took into account the natural spread of the estimates. This design allowed us to test how people weight advice as a function of its distance from their initial opinions. In particular, we predicted that the greater the distance of the advice, the lower the weight it would be assigned. Moreover, we expected a difference between high- and low-knowledge judges.
Method
Procedure
The procedure included two phases, as in Study 1. In the first phase the respondents (N = 48) were asked to produce estimates in answer to a list of questions. In the second phase they received the same list of questions along with advice and were instructed to provide their final estimates. There were a total of 24 trials, with one-third of the questions in each of the three within-participant distance conditions: near, intermediate, and far. The three distance categories were presented in random order.
Selection of advice
For each question we had a pool of 120 estimates collected earlier in previous studies. For each respondent the computer generated advice for each question after Phase 1 was over. The computer accessed the estimates for each question and sorted them in order of absolute distance (in years) from the respondent's point estimate (from nearest to farthest). The advice to be offered to the respondent was then chosen according to its position relative to the initial estimate. In the near condition, the estimate in the 20th percentile was selected (i.e., the 24th nearest out of 120 estimates); in other words, 20% of the estimates were between the initial estimate and the advice. For the intermediate distance condition, 55% of the estimates separated the initial estimate from the advice, and for the far distance condition the percentage was 90%. The mean absolute distance of the advice from the respondents' initial estimates was 24.1 years in the near condition, and 50.1 and 93.8 years in the intermediate and far conditions, respectively. The questions were randomly assigned to the different conditions.
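As a sketch, the selection rule might be implemented as follows. This is my illustration of the procedure described above, not the original software; pool stands for the 120 stored estimates for one question.

def select_advice(pool, initial, condition):
    """Pick advice from an ecological pool by rank distance from the initial estimate."""
    percentile = {"near": 0.20, "intermediate": 0.55, "far": 0.90}[condition]
    # Sort the pooled estimates from nearest to farthest from the initial estimate.
    ranked = sorted(pool, key=lambda est: abs(est - initial))
    index = max(int(percentile * len(ranked)) - 1, 0)
    return ranked[index]  # e.g., the 24th nearest of 120 in the "near" condition

# Hypothetical usage for one question:
# advice = select_advice(pool, initial=1905, condition="near")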
Nothing was said to the respondents about how the estimates were selected from the pools. As in Study 1, we merely told respondents that the various pieces of advice were initial estimates generated by individuals who had participated in similar studies in the past. We also told them that at the end of the study they would be awarded a bonus for accuracy: 1 shekel ($0.30 at the time of the study) for each estimate that had better than average accuracy. Thus, they could earn up to 24 shekels in bonus payments altogether. Hence it was in their interest to consider their answers carefully and make the best use of the advice provided.
Results
As in the previous study, respondents were median-split into two knowledge groups according to their mean absolute error in the first phase of the study. The mean weights are shown in Table 3. An analysis of variance was performed on the weighting of advice with the decision maker's knowledge (high, low) and advice distance (near, intermediate, far) as factors. There were significant effects of knowledge, F(1, 46) = 23.55, p < .001, and distance, F(2, 46) = 3.69, p < .05, as well as an interaction, F(2, 46) = 7.95, p < .01. To understand the interaction, the simple effects were examined. The simple effect of knowledge was significant in the intermediate advice condition, F(1, 46) = 26.8, p < .001, and in the far advice condition, F(1, 46) = 31.1, p < .001, but not in the near condition, F(1, 46) = 1.65, p > .2. In sum, the high-knowledge group generally placed less weight on the advice than did the low-knowledge group; moreover, their weighting of the advice decreased with distance.

The use of ecological advice improved accuracy by about 20%. The mean absolute errors before and after the advice was given were 50.1 and 40.2 years, respectively, F(1, 46) = 48.9, p < .001. As Table 4 shows, the accuracy gain was 6% for the high-knowledge group (error reduced from 35.4 to 33.2 years), and 27% for the low-knowledge group (from 63.5 to 46.7). This difference in accuracy gains led to a significant interaction between knowledge group and type of error (initial vs final), F(1, 46) = 27.3, p < .001.
Discussion
The high-knowledge respondents discounted the advice. Moreover, their weighting of the advice decreased systematically with distance. The low-knowledge group did not exhibit discounting, nor did they display a clear pattern in weighting the advice, perhaps because they felt they could benefit even from distant advice (accuracy gains are shown in Table 4). We will return to this issue in the third study, in which the advice was generated differently.

In this study, advice was drawn from ecological samples of the estimates generated by other respondents in earlier studies. Advice distance was operationally defined relative to the natural distribution of the estimates given for each question, so that, for instance, far advice occupied the same relative position within the respective distributions. In our view, this design provides two important advantages. First, it helps make the advisory estimates seem realistic and believable, as having indeed been generated by other respondents. A second advantage is that the ecological design allows easier generalization from experiment to reality. A disadvantage of the ecological design is that the absolute distances of the advice from the initial opinions could not be controlled. In particular, we did not control whether the advice was in the direction of the truth or leading away from it. In the next study we included this factor as well in the analysis.
Table 3
Study 2: Weight of advice as a function of distance and the decision maker's knowledge

Decision maker's knowledge   Near   Intermediate   Far
High                         0.33   0.27           0.17
Low                          0.44   0.53           0.49

Table 4
Study 2: Judgment errors before and after getting ecological advice

Decision maker's knowledge   Absolute error before   Absolute error after   % Improvement
High                         35                      33                     6
Low                          64                      47                     27

Study 3: Weighting mechanical advice as a function of distance

In Study 3 the absolute distance of the advice was controlled. Advice was created mechanically by adding or subtracting a constant from the decision maker's initial estimate. The use of advice that is a simple transformation of the initial estimates does not abide by the ecological constraints of the previous study, but it allowed us to test further our hypothesis regarding the influence of advice distance on weighting policies. We did this by separating the trials into two conditions: advice that was helpful (directed toward the truth) and advice that was not helpful (directed away from the truth). Thus we could analyze the effect of distance in either direction for the low- and high-knowledge groups.
Method
The procedure included the same two phases as in the previous study. In the first phase, the respondents (N = 76) were asked to produce estimates for 24 questions. In the second phase they received advice at various distances from their initial estimates and were asked to form their final estimates.

The procedure for generating the advice was as follows. Three sets of constants were created, based on the mean absolute distances in Study 2. The near advice was generated by either adding or subtracting one of the following constants from the initial estimate: 15, 18, or 20 years. The intermediate distance advice was generated at distances of 40, 43, or 45 years. The far advice was generated at distances of 70, 72, or 75 years. The use of three constants at each distance category was meant to obscure the underlying structure of the advice set (which indeed was not transparent to any of the respondents). Eight questions were randomly assigned to each of the three advice distance conditions (near, intermediate, and far). The order of the various conditions was randomized for each respondent and the constants for creating the advice were sampled at random. The other aspects of the study were identical to those of the previous study, including the bonus for accuracy.
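A sketch of this generator follows (my own illustration under the constants reported above, not the original software; the helpful flag makes the toward/away-from-truth balance described in the Results explicit):

import random

# Three constants per distance category, as reported above, to obscure
# the underlying structure of the advice set.
CONSTANTS = {"near": (15, 18, 20),
             "intermediate": (40, 43, 45),
             "far": (70, 72, 75)}

def mechanical_advice(initial, truth, condition, helpful, rng=random):
    """Shift the initial estimate by a sampled constant, toward or away from the truth."""
    offset = rng.choice(CONSTANTS[condition])
    toward = 1 if truth >= initial else -1
    sign = toward if helpful else -toward
    return initial + sign * offset

# Example: an initial estimate of 1905 with truth 1869, far condition,
# unhelpful direction, yields advice 70-75 years further from the truth.
print(mechanical_advice(1905, 1869, "far", helpful=False, rng=random.Random(0)))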
Results
The sample was median-split into two groups according to the respondents' degree of knowledge (a function of average absolute errors) in the first phase. The weight of the advice was calculated as in Study 1. Table 5 shows the mean weights as a function of the respondents' degree of knowledge and the advice distance. An analysis of variance on the weights, with knowledge (high, low) and distance (near, intermediate, far) as factors, showed the following significance levels: distance, F(2, 148) = 7.95, p < .005; knowledge, F(1, 74) = 3.89, p = .052. Since knowledge was a significant factor in Studies 1–2, the one-tailed significance level p < .05 is warranted in this case. The interaction was not significant, F < 1. Specifically, the high-knowledge group discounted the advice more than the low-knowledge one. The weight of advice decreased as its distance from the initial opinion increased.

Next, the trials were separated into two conditions according to the direction of the advice: helpful advice (pointing towards the truth) and unhelpful advice (pointing away from the truth). (There were half in each direction at each distance condition and for each respondent, by design.) The weights of the unhelpful advice were 0.29, 0.23, and 0.18 for near, intermediate, and far, respectively, for the high-knowledge group, and 0.34, 0.28, and 0.34 for the low-knowledge group. The respective weights of the helpful advice were 0.33, 0.33, and 0.27 for the high-knowledge group, and 0.42, 0.40, and 0.29 for the low-knowledge group. A three-way analysis of variance found significant effects of knowledge, F(1, 74) = 4.18, p < .05; direction, F(1, 39) = 19.1, p < .05; and distance, F(2, 78) = 6.44, p < .05. There were no significant two-way interactions, F < 1, but there was a significant triple interaction, F(2, 148) = 3.36, p < .05.

The effect of knowledge on the weight of advice was shown in previous analyses. The direction effect means that the helpful advice was weighted more than the unhelpful advice. Respondents presumably retrieved more support from their memory for the former type of advice. The declining pattern of weights in the high-knowledge condition was observed in both directions. The pattern of weights in the low-knowledge condition was not stable across directions. We will return to these results and the differences between Studies 2 and 3 in the final discussion.

In terms of accuracy, the mechanically generated advice was not as helpful to respondents as was the ecological advice in the first two studies. The mean absolute error barely changed as a result of receiving the advice (error reduced from 65.6 to 63.4), F(1, 74) < 1, yielding no significant accuracy gains, either overall or in either of the knowledge groups, as Table 6 shows. The results here greatly depart from those of Studies 1 and 2.
Table 5
Study 3: Weight of advice as a function of distance and the decision maker's knowledge

Decision maker's knowledge   Near   Intermediate   Far
High                         0.31   0.28           0.23
Low                          0.38   0.34           0.30

Table 6
Study 3: Judgment errors before and after getting (mechanical) advice

Decision maker's knowledge   Absolute error before   Absolute error after   % Improvement
High                         52                      49                     6
Low                          81                      79                     2
General discussion
We investigated two main aspects of advice use. The first involves the influence of advice on the decision maker's final judgment, and in particular the weight assigned to advice. The second involves the accuracy gains resulting from the weighting policy. We consider each of these aspects in turn.
Weighting advice
A coherent picture emerges from the advice-weighting policies observed across the studies. First, the results of Study 1 show egocentric discounting of advice. Second, advice discounting was not indiscriminate; individuals had a veridical view of their knowledge, so that the less knowledgeable ones placed greater weight on the advice (Studies 1–3). Third, the weight of advice declined with the distance between the advice and the judges' initial opinions (Studies 2–3); this distance effect was exhibited in the high-knowledge condition and, to a lesser extent, in the low-knowledge condition as well.
Advice discounting: A self/other effect
The asymmetric weighting of one's own and other opinions is attributed to the fundamental asymmetry in access to the underlying justifications for each opinion. Decision makers can assess what they know and the strength of their own opinions, but are far less able to assess what an advisor knows and the reasons underlying her/his opinions. Naturally, one's confidence about a given opinion (or hypothesis) is related to the amount of evidence that one could readily recruit to support it. Other things being equal, decision makers are likely to feel more confident about their own opinion than about the other opinion; hence their own estimate would receive greater weight than the advice. Earlier findings suggest that respondents weight each opinion according to the expertise ascribed to its source (Birnbaum & Stegner, 1979; Birnbaum & Mellers, 1983). The self/other asymmetry presumably enhances the expertise ascribed to the self. This line of reasoning about information asymmetry is also reminiscent of the principal–agent problem in organizations (Eisenhardt, 1989).

There is also other evidence for advice discounting. Harvey and Fischer (1997), using a cue-learning task, had respondents make initial estimates and then final estimates on the basis of a recommendation from an advisor. They found a shift in judgment of about 20–30% towards the advice – a result consistent with what we observed. Using a time-series forecasting task, Lim and O'Connor (1995) had respondents integrate a statistical forecast into their initial judgment-based forecast. These respondents assigned about twice as much weight to their own initial forecast as to the statistical forecast. Sorkin et al. (2001) also report higher weights placed on one's own opinion in a group signal-detection task. On each trial, one member was randomly selected and told that she was to give the group's answer on the basis of the other members' responses. A participant's weight was consistently higher when she was the designated responder.

There is evidence that such discounting also occurs in professional settings. In his literature review on the impact of genetic counseling, Kessler (1989) concludes that genetic counseling does not produce dramatic changes in counselees' reproductive decisions. The best predictor of the post-counseling reproductive decision is the counselee's pre-counseling intentions. Advice discounting may also be related to the public's perception of risks (such as environmental and health-related risks). A recurring finding is that experts and the public differ in their perception of such risks, thus hindering the implementation of public policy (Flynn, Slovic, & Mertz, 1993). Experts' risk communication can be viewed as advice to individuals in their daily decisions regarding the safety measures they need to take against various types of risks (e.g., radiation from mobile phones, using a mobile phone while driving). The observed skepticism towards expertise can be viewed as a form of discounting of the experts' advice. Finally, the phenomenon that individuals stick closely to their initial opinions is also consistent with the findings of perseverance and resistance to change known from classical research on attitudes (e.g., Sherman & Cohen, 2002).
Alternative accounts
Motivational effects
The explanation of the self/other effect in terms of differential information access seems preferable to alternative explanations that posit either a self-serving bias (e.g., an optimistic bias) or commitment to one's past decisions as the root of discounting others' views. To be sure, self-serving biases pervade interpersonal comparisons, in that, for example, people believe that they have lower chances of experiencing negative life events, such as car accidents and strokes, than others do, or that they rank higher than others on various abilities and attributes, such as driving ability and social skills (e.g., Brown, 1986). But a bias of this sort does not readily explain respondents' weighting policies for advice, especially the sensitivity of those policies to the respondents' own knowledge (Studies 1–3) and their sensitivity to the quality of the advice (Yaniv & Kleinberger, 2000).

Commitment to one's past decisions is a powerful motive in decision making, yet it cannot readily explain the findings either. The antecedents of commitment – high costs for being inconsistent, the need to justify decisions to others, having to admit past mistakes, and having to save face with respect to ego-involving issues – were largely absent in the present studies. Our respondents made their judgments in a private setting (by entering responses into a computer file), received incentives for accuracy, and were not asked to justify their estimates.

A cognitive explanation based on informational asymmetry and the assessment of available evidence is more parsimonious, and hence superior to explanations based on a self-serving bias or commitment, because it can readily account for the finding that respondents' weights on advice are sensitive to the quality of the advice (Yaniv & Kleinberger, 2000) as well as to their own knowledge (e.g., Study 1), without making unnecessary assumptions.
Information integration
Our account of the present results on weighting advice is linked to theories in the tradition of information integration. Such theories posit simple cognitive processes to explain the updating of impressions and beliefs. Anderson (1968) attributes the primacy effect in impression formation to attention decrement over successive serial positions, as the weights given to later cues in a sequence decrease. Expanding on such ideas, Hogarth and Einhorn (1992) introduced a formal model of how people update their beliefs on the basis of sequential information (e.g., pieces of evidence in a trial or a list of personality traits). A central characteristic of the updating process, according to Hogarth and Einhorn, is the response mode, namely whether updating is made globally at the end of the sequence or step by step, after each item is presented.

According to Hogarth and Einhorn, the end-of-sequence mode is conducive to primacy effects, and the step-by-step mode, to a recency effect. Our respondents' behavior shows a primacy effect, as they preferred their own opinion to the advice (e.g., Study 1). In this respect our findings are in agreement with the prediction for the end-of-sequence mode, based on the belief-updating model. But our decision–advice–revise procedure does not fall squarely into either of the response mode categories – "end of sequence" or "step by step" – since respondents had in fact generated one of the two estimates themselves in an earlier phase. This differs from information integration studies, where the sequences of items are fully controlled by the experimenter. Moreover, the sequential nature of the belief-updating model makes the order of presentation a key factor. Our procedure highlights the judge's own opinion; hence order, being just one factor among others, may not necessarily be as important a factor as the self/other asymmetry.

The present studies, like the information-integration approach, focus on respondents' weighting policies. The present studies highlight additional key features, including the use of realistic (rather than fictional) information, thereby enabling respondents to rely on pre-experimental knowledge. In sum, I suggest that our decision–advice–revise procedure adds another aspect to information integration, one which has not been explored so far and is potentially fruitful.
The effect of advice distance on the revision of opinion
We hypothesized that the weight of advice would decline as its distance from the respondent's initial opinion grew larger. It appears that knowledge modulates the distance effect. The decline of weight with distance was shown consistently for the high-knowledge respondents (in Studies 2–3), but less regularly for the low-knowledge respondents (in Study 3, but not in Study 2). While we did not predict a difference between high- and low-knowledge respondents, we can make sense of these findings.

The more knowledgeable individuals presumably have a narrower latitude of acceptance than the less knowledgeable individuals. Therefore the two groups differ in their attributions. The more knowledgeable judges, according to this hypothesis, are more likely to attribute the discrepancy between their own and another person's opinion to the other person's fault or error rather than their own. In particular, upon encountering a different opinion the two groups proceed with different inferences – the more knowledgeable respondents with "I guess the advisor is wrong" and the less knowledgeable ones with "I guess I am wrong." Such attributions might evolve from the respondents' different experiences. The initial views of the knowledgeable judges are often in the neighborhood of the best solution, hence they tend to assume that near advice is of good quality while far advice is of lower quality. The less knowledgeable judges might be less inclined to use distance as a predictor of advice quality, since their own hunches are less accurate. This might explain why the distance effect was less pronounced for the low-knowledge respondents.

The present findings on the distance effect are consistent with earlier work on attitude change which suggests that the influence of a message (measured as a proportional change) tends to decrease as a function of its discrepancy from the recipient's initial attitude (Bochner & Insko, 1966).4 In more recent work on stereotype change, Kunda and Oleson (1997) tested the influence of a single counter-stereotypic example on existing personal stereotypes. For instance, given the stereotype that public relations (PR) people are extroverts, Kunda and Oleson presented to respondents either an extremely deviant example (i.e., an extremely introverted PR person) or a moderately deviant example (a slightly introverted PR person). The extreme example had less influence on stereotype change, in accord with predictions of assimilation/contrast theories (Sherif & Hovland, 1961). Specifically, a slightly deviant example is easily assimilated into the stereotype and hence can change it, whereas an extremely discrepant one is in great contrast to the stereotype and so is likely to be discounted. Recent work on anchoring has also shown that extreme anchors have proportionally less effect on judgment than moderate ones (Marti & Wissler, 2000; Wegener, Petty, Detweiler-Bedell, & Jarvis, 2001). According to these authors, judges tend to discredit or argue against extreme anchors, thereby making them less influential.

4 In fact, this could lead to a phenomenon called the boomerang effect. When a message is highly discrepant, judges shift their attitude less toward it than they would have done had the message been less discrepant.
In a different vein, early studies on information integration (Anderson & Jacobson, 1965) and additive models in judgment (Slovic, 1966) suggest that judges discount inconsistent cues. Moreover, studies of the process of combining opinions show that judges give greater weight to consensus opinions while discounting outlier opinions (Yaniv, 1997). Finally, studies of group decision making suggest that a discrepant opinion's impact on the group's final decision declines as the discrepancy increases (Davis et al., 1997). In the foregoing studies an opinion (or cue) is discounted due to its distance from the consensus. In contrast, in the studies reviewed above an opinion is discounted due to its distance from the judge's initial opinion. The common thread between the two phenomena is that inconsistent information is discounted.
The benefit of advice
By consulting one advisory opinionrandomly sam-
pled from an ecological pool of estimatesindividuals in
Study 1 improved their estimation accuracy by about
20%. There is a straightforward important consequence
of such findings which often escapes peopleÕs attention.
People do not always realize that in order to be helpful,
the other opinion need not come from a smarter or more
knowledgeable individual than the decision makers
themselves. To reap the accuracy gains from aggrega-
tion, the additional opinions only need come from in-
dependent advisors (though small deviations from
perfect independence still permit appreciable gains; e.g.,
Johnson, Budescu, & Wallsten, 2001).
That combining opinions improves accuracy is one of
the most robust findings in the judgment literature. The
explanation for the observed accuracy gains in the
present studies was outlined briefly earlierit relies (as
all formal models do) on the central limit theorem in
statistics as well as certain empirical facts about the task,
such as the bias and inter-judge correlations (e.g.,
Wallsten et al., 1997; Johnson et al., 2001). Indeed, the
results of Study 2 also show accuracy gains. In Study 3,
the advice was generated mechanically, by arbitrarily
adding or subtracting a constant from the original
opinion in the first phase. Since the advice was highly
correlated with the initial opinion, we did not expect
accuracy gains in that study.
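
The same sketch, modified so that the advice is generated mechanically as in Study 3 (again under illustrative error assumptions), shows why no gains were expected there: advice that merely offsets the judge's own estimate by a constant carries the judge's own error, so averaging cancels nothing.

```python
# The same sketch with mechanically generated advice, mirroring the
# logic of Study 3 (parameters again illustrative): the "advice" is the
# judge's own estimate shifted by a random-signed constant, so it is
# perfectly correlated with the initial opinion.
import random

random.seed(2)
TRUTH, ERROR_SD, OFFSET, N = 0.0, 10.0, 8.0, 100_000

solo_err = combined_err = 0.0
for _ in range(N):
    own = random.gauss(TRUTH, ERROR_SD)
    advice = own + random.choice([-OFFSET, OFFSET])  # carries the judge's own error
    combined = (own + advice) / 2.0
    solo_err += abs(own - TRUTH)
    combined_err += abs(combined - TRUTH)

print(f"mean absolute error, own estimate:  {solo_err / N:.2f}")
print(f"mean absolute error, with 'advice': {combined_err / N:.2f}")
# No error component independent of the judge's is averaged in, so
# accuracy does not improve (here it is even slightly worse).
```

Run side by side, the two sketches make the contrast plain: the gains from aggregation come from averaging away independent error, not from the mere presence of a second number.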
Receiving and using other types of advice
The present research involved quantitative advice
about factual matters (dates of events). Future investi-
gations could and should be extended to include other
types of advice. One might distinguish between qualita-
tive (verbal) advice and quantitative advice. In particu-
lar, verbal advice does not lend itself to the same sort of
weighting evaluated in the present studies.
In addition, one could distinguish between opinions
about matters of fact (estimates or forecasts) and about
matters of taste (evaluations or attitudes). The benefit
accrued from combining opinions about matters of fact
is both demonstrable and understood theoretically. In
contrast, simple aggregation of tastes for the purpose of individual decision making (such as opinions about a movie that one has not seen or a restaurant that one has yet to try out) raises conceptual difficulties. People are
entitled to their different tastes and it is less clear how
individuals might combine their own preferences with
those of a friend, colleague, or professional advisor.
Thus a theory about combining opinions in matters of
taste is in order. A related question is whether consulting others' opinions about matters of taste helps improve
decision quality (assuming an acceptable definition of
quality).
The present perspective suggests ways of thinking
about how these other types of advice might be inte-
grated. I suggest that qualitative advice, such as opin-
ions about taste, helps decision makers overcome certain
common weaknesses in reasoning. The relevant weak-
nesses include decision makers' failure to generate enough alternatives for choice and their tendency to try to
confirm rather than disconfirm their prior views. For
example, Svenson's (1996) differentiation-consolidation
theory claims, in the tradition of dissonance theories,
that self-confirmation is an ongoing, continuous process
through which individuals construct justifications for
their decisions.
I suggest that receiving advice (of any type) serves an
adaptive function since it helps individuals overcome
self-confirmation tendencies. Advisors can expose deci-
sion makers to unattended alternatives and unintended
consequences, thereby challenging them to rethink their
prior opinions and weigh the new and different opinions
using some sort of internal negotiation process
that eventually yields a compromise between the two
opinions. I do not claim that advisors are free of rea-
soning biases, but rather that, being independent, they
effectively challenge decision makers with ideas that they might not otherwise gather on their own. Related
suggestions appear, for instance, in Jonas and FreyÕs
(2003) findings that advisors conduct a balanced infor-
mation search and, under certain conditions, transmit
both confirming and disconfirming information to per-
sonal decision makers. Last but not least, a particularly promising avenue for further study of the impact and benefit of advice about matters of taste involves the
role of the ‘‘personal match’’ between the givers and
receivers of advice. Presumably the greater the perceived
similarity in characteristics (e.g., traits, background,
and education), the greater the impact and benefit of
receiving the advice.
In sum, researchers of individual decision-making
have traditionally developed and investigated various
decision-support systems that might help individuals
improve their decisions (decision trees, formal models,
computer models, etc.). I suggest that the social-cogni-
tive function of seeking advice as a ‘‘corrective proce-
dure’’ or support system for the individual decision
maker has not been explored sufficiently. It is not sur-
prising that advice-seeking pervades daily decisions,
ranging from the choice of a movie to a decision about
the promotion of an employee. What is surprising is that
so little attention has been paid in decision research to a
process so fundamental in real life. It is imperative for
future research to consider the procedures by which various types of advice (e.g., qualitative verbal advice, opinions about matters of taste) are best elicited and used.
References
Ajzen, I. (2001). Nature and operation of attitudes. Annual Review of
Psychology, 52, 27–58.
Anderson, N. H. (1968). Application of a linear-serial model to a
personality-impression task using serial presentation. Journal of
Personality and Social Psychology, 10, 354–362.
Anderson, N. H., & Jacobson, A. (1965). Effect of stimulus inconsis-
tency and discounting instructions in personality impression
formation. Journal of Personality and Social Psychology, 2, 531–
539.
Armstrong, J. S. (2001). Principles of forecasting: A handbook for
researchers and practitioners. Dordrecht, Netherlands: Kluwer.
Aronson, E., Turner, J., & Carlsmith, M. (1963). Communicator
credibility and communicator discrepancy as determinants of
opinion change. Journal of Abnormal and Social Psychology, 67,
31–36.
Ashton, A. H., & Ashton, R. H. (1985). Aggregating subjective
forecasts: Some empirical results. Management Science, 31, 1499–
1508.
Birnbaum, M. H., & Mellers, B. A. (1983). Bayesian inference:
Combining base rates with opinions of sources who vary in
credibility. Journal of Personality and Social Psychology, 45, 792–
804.
Birnbaum, M. H., & Stegner, S. E. (1979). Source credibility in social
judgment: Bias, expertise, and the judge's point of view. Journal of
Personality and Social Psychology, 37, 48–74.
Blattberg, R. C., & Hoch, S. J. (1990). Database models and
managerial intuition: 50% model + 50% manager. Management
Science, 36, 887–899.
Bochner, S., & Insko, C. A. (1966). Communicator discrepancy, source
credibility, and opinion change. Journal of Personality and Social
Psychology, 4, 614–621.
Brown, J. D. (1986). Evaluations of self and others: Self-enhancement
biases in social judgments. Social Cognition, 4, 353–376.
Budescu, D. V., & Rantilla, A. K. (2000). Confidence in aggregation of
expert opinions. Acta Psychologica, 104, 371–398.
Clemen, R. T. (1989). Combining forecasts: A review and annotated
bibliography. International Journal of Forecasting, 5, 559–
583.
Davis, J. H., Zarnoth, P., Hulbert, L., Chen, X.-p., Parks, C., & Nam,
K. (1997). The committee charge, framing interpersonal agreement,
and consensus models of group quantitative judgment. Organiza-
tional Behavior and Human Decision Processes, 72, 137–157.
Einhorn, H. J., Hogarth, R. M., & Klempner, E. (1977). Quality of
group judgment. Psychological Bulletin, 84, 158–172.
Eisenhardt, K. (1989). Agency theory: An assessment and review.
Academy of Management Review, 14, 57–74.
Flynn, J., Slovic, P., & Mertz, C. K. (1993). Decidedly different: Expert
and public views of risks from a radioactive waste repository. Risk
Analysis, 13, 643–648.
Harvey, N., & Fischer, I. (1997). Taking advice: Accepting help,
improving judgment and sharing responsibility. Organizational
Behavior and Human Decision Processes, 70, 117–133.
Hogarth, R. M. (1978). A note on aggregating opinions. Organiza-
tional Behavior and Human Performance, 21, 40–46.
Hogarth, R. M., & Einhorn, H. J. (1992). Order effects in belief
updating: The belief-adjustment model. Cognitive Psychology, 24,
1–55.
Johnson, T. R., Budescu, D. V., & Wallsten, T. S. (2001). Averaging
probability judgments: Monte Carlo analyses of asymptotic diag-
nostic value. Journal of Behavioral Decision Making, 14, 123–140.
Jonas, E., & Frey, D. (2003). Information search and presentation in
advisor-client interactions. Organizational Behavior and Human
Decision Processes, 91, 154–168.
Jungermann, H. (1997). When you can't do it right: Ethical dilemmas
of informing people about risks. Risk Decision and Policy, 2, 131–
145.
Kennedy, J., Kleinmuntz, D. N., & Peecher, M. E. (1997). Determi-
nants of the justifiability of performance in ill-structured audit
tasks. Journal of Accounting Research, 35, 105–123.
Kessler, S. (1989). Psychological aspects of genetic counseling: A
critical review of the literature dealing with education and
reproduction. American Journal of Medical Genetics, 34, 340–
353.
Kunda, Z., & Oleson, K. C. (1997). When exceptions prove the rule:
How extremity of deviance determines the impact of deviant
examples on stereotypes. Journal of Personality and Social Psy-
chology, 72, 965–979.
Libby, R., & Blashfield, R. K. (1978). Performance of a composite as a
function of the number of judges. Organizational Behavior and
Human Performance, 21, 121–129.
Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also
know more about how much they know? Organizational Behavior
and Human Performance, 20, 159–183.
Lim, J. S., & O'Connor, M. (1995). Judgmental adjustment of initial
forecasts: Its effectiveness and biases. Journal of Behavioral
Decision Making, 8, 149–168.
Marti, M. W., & Wissler, R. L. (2000). Be careful what you ask for:
The effect of anchors on personal injury damages awards. Journal
of Experimental Psychology: Applied, 6, 91–103.
Sherif, M., & Hovland, C. I. (1961). Social judgment: Assimilation and
contrast effects in communication and attitude change. New Haven,
CT: Yale University Press.
Sherman, D. K., & Cohen, G. L. (2002). Accepting threatening
information: Self-affirmation and the reduction of defensive biases.
Current Directions in Psychological Science, 11, 119–122.
Slovic, P. (1966). Cue-consistency and cue-utilization in judgment. The
American Journal of Psychology, 79, 427–434.
Sniezek, J. A., & Buckley, T. (1995). Cueing and cognitive conflict in
judge-advisor decision making. Organizational Behavior and Hu-
man Decision Processes, 62, 159–174.
Sniezek, J. A., & Henry, R. A. (1989). Accuracy and confidence in
group judgment. Organizational Behavior and Human Decision
Processes, 43, 1–28.
Sorkin, R. D., Hayes, C. J., & West, R. (2001). Signal detection
analysis of group decision making. Psychological Review, 108, 183–
203.
Svenson, O. (1996). Decision making and the search for fundamental
psychological regularities: What can be learned from a process
perspective? Organizational Behavior and Human Decision Pro-
cesses, 65, 252–267.
Wallsten, T. S., Budescu, D. V., Erev, I., & Diederich, A. (1997).
Evaluating and combining subjective probability estimates. Journal
of Behavioral Decision Making, 10, 243–268.
Wegener, D. T., Petty, R. E., Detweiler-Bedell, B. T., & Jarvis, W. B.
G. (2001). Implications of attitude change theories for numerical
anchoring: Anchor plausibility and the limits of anchor effective-
ness. Journal of Experimental Social Psychology, 37, 62–69.
Winkler, R. L., & Poses, R. M. (1993). Evaluating and combining
physicians' probabilities of survival in an intensive care unit.
Management Science, 39, 1526–1543.
Yaniv, I. (1997). Weighting and trimming: Heuristics for aggregating
judgments under uncertainty. Organizational Behavior and Human
Decision Processes, 69, 237–249.
Yaniv, I., & Hogarth, R. M. (1993). Judgmental versus statistical
prediction: Information asymmetry and combination rules. Psy-
chological Science, 4, 58–62.
Yaniv, I., & Kleinberger, E. (2000). Advice taking in decision making:
Egocentric discounting and reputation formation. Organizational
Behavior and Human Decision Processes, 83, 260–281.
Yates, J. F., Price, P. C., Lee, J., & Ramirez, J. (1996). Good
probabilistic forecasters: The ''consumer's'' perspective. Interna-
tional Journal of Forecasting, 12, 41–56.
Zarnoth, P., & Sniezek, J. A. (1997). The social influence of confidence
in group decision making. Journal of Experimental Social Psychol-
ogy, 33, 345–366.
Zarnowitz, V. (1984). The accuracy of individual and group forecasts
from business and outlook surveys. Journal of Forecasting, 3, 11–
26.
Zimbardo, P. G., & Leippe, M. R. (1991). The psychology of attitude
change and social influence. Philadelphia: Temple University Press.
Received 27 June 2002