Thirty Second International Conference on Information Systems, Shanghai 2011
Overconfidence in IT Investment Decisions:
Why Knowledge can be a Boon and Bane at
the same Time
Completed Research Paper
Johannes Vetter
University of Munich
Institute for Information Systems
and New Media
Ludwigstr.28, 80539 Munich,
Germany
vetter@bwl.lmu.de
Alexander Benlian
University of Munich
Institute for Information Systems
and New Media
Ludwigstr.28, 80539 Munich,
Germany
benlian@bwl.lmu.de
Thomas Hess
University of Munich
Institute for Information Systems and New Media
Ludwigstr.28, 80539 Munich, Germany
thess@bwl.lmu.de
Abstract
Despite their strategic relevance in organizations, information technology (IT)
investments still result in alarmingly high failure rates. As IT investments are such a
delicate high-risk/high-reward matter, it is crucial for organizations to avoid flawed IT
investment decision-making. Previous research in consumer and organizational
decision-making shows that a decision’s accuracy is often influenced by decision-
makers’ overconfidence and that the magnitude of overconfidence strongly depends on
decision-makers’ certainty of their knowledge. Drawing on these strands of research,
our findings from a field survey (N=166) show that IT managers’ decisions in IT
outsourcing are indeed affected by overconfidence. However, an in-depth investigation
of three types of knowledge, namely experienced, objective and subjective knowledge,
reveals that different types of knowledge can have contrasting effects on overconfidence
and thus on the quality of IT outsourcing decisions. Knowledge can be a boon and bane
at the same time. Implications for research and practice are discussed.
Keywords: overconfidence, IT investment decisions, decision biases, knowledge types,
IT outsourcing, miscalibration, better-than-average effect, illusion of control
Introduction
Since the late 1970s, organizations around the globe have made tremendous investments in information
technology (IT). World IT spending now exceeds $3 trillion annually as companies around the world
embrace IT (Gartner 2009). Yet, the relationship between IT investments and anticipated returns has
perplexed researchers over the last decades because IT investment decisions, despite their strategic
potential to increase productivity and firm performance (Sabherwal and King 1995), still show significantly
higher failure rates in comparison to other investment decisions in companies (Yeo 2002). Around 23% of
all IT projects still fail, and IT projects labeled as ‘challenged’ (i.e., suffering from budget overruns and/or
program slips and offering less functionality than originally specified) often add up to more than 50% of
all IT projects in organizations (Du et al. 2007; Yeo 2002).
Given these sobering statistics, it is of vital importance for organizations to make sound IT investment
decisions that are not misled by undue risk-taking behavior of IT executives (Benaroch et al. 2007;
Benaroch 2002). Research in organizational decision-making and behavioral science indeed shows that
decision quality is strongly influenced by the behavior of individual decision-makers or decision groups.
There is, for example, agreement in research literature that the adoption of IT outsourcing practices is a
major management decision made by individuals rather than organizations. Empirical studies have
indicated that final decisions regarding the sourcing of IT functions are mostly made by an organization’s
highest-ranking IT executive (Apte et al. 1997) and that these decisions are often based on decision
heuristics due to limitations in time, information and cognitive resources (Simon 1959). In this regard,
overconfidence has been found to be one of the most robust heuristics (McKenzie et al. 2008) serving as
an explanation for severe failures in decision-making, such as entrepreneurial failures or stock market
bubbles (Glaser and Weber 2007). The influence of overconfidence on behavior has also been shown to
vary across different personal characteristics of decision-makers. Among several different cognitive and
motivational reasons for overconfidence (Keren 1997), different types of knowledge are consistently
mentioned as a main impetus for overconfident behavior (Forbes 2005; McKenzie et al. 2008; Menkhoff
et al. 2006). As studies in IS research also indicate that IT managers often have very different knowledge
backgrounds, ranging from a strong technical focus to a rather general management focus (Enns et al.
2003; Bassellier et al. 2001), examining the impact of different knowledge types on overconfidence should
clearly be of value for IT managers and researchers alike.
Drawing on decision-making and consumer research literature, this study investigates the role of three
types of knowledge, namely experienced knowledge, objective knowledge and subjective knowledge (Brucks
1985), in influencing overconfident behavior of IT decision-makers. To the best of our knowledge, no
research has been conducted so far that has explicitly focused on how these different types of knowledge
affect IT decision-makers’ overconfidence and thus the ability to make sound IT investment decisions.
Examining this relationship, however, has a theoretical as well as practical relevance. On the one hand,
there is no consensus in the academic literature regarding whether too much or too little knowledge leads
to a reduction of overconfidence (Skala 2008). A more nuanced perspective would thus advance our
understanding of the role of different kinds of knowledge in affecting overconfidence of IT decision-
makers. On the other hand, understanding the link between knowledge and overconfident behavior is
important for practice because “managers who fall prey to various heuristics and biases [such as
overconfidence] while making decisions […] are a major source of risk for good decisions” (Khan and
Stylianou 2009, p. 64). To understand what types of knowledge are indicative of overconfidence would
thus help practitioners to better manage overconfidence and its detrimental consequences (Russo and
Schoemaker 1992). In our study, we address two main research questions:
(1) Do IT decision-makers suffer from overconfidence or not?
(2) How are different types of knowledge related to IT decision-makers’ level of overconfidence?
The remainder of the paper is organized as follows. We begin by introducing the conceptual foundations
of this paper, including overconfidence and different types of knowledge. Then, we present our research
model and develop our hypotheses on the relationship between the different knowledge types and IT
decision-makers’ overconfidence. Further, we present the design and research methodology of the study
and report our results. After discussing the findings, the paper highlights implications for both research
and practice and points out promising areas for future research.
Conceptual Foundations
Overconfidence and Previous Studies in IS Research
The term ‘overconfidence’ has been widely used in psychology since the 1960s. Despite extensive research
on overconfidence in subsequent decades that, for example, found overconfidence to be “perhaps the
most robust finding in the psychology of judgment” (DeBondt and Thaler 1995), its origins and reasons
for its existence have not been clearly and unambiguously defined (Skala 2008). Since the overconfidence
phenomenon was first taken up by other fields of research in the 1970s, including economics and
finance, the meaning of overconfidence has been stretched beyond its original definitions. Different
streams of behavioral research have developed an array of operationalizations that fall under the
common label of overconfidence. In recent overconfidence research studies, three phenomena have been
used to tap into overconfidence: miscalibration, better-than-average effect and illusion of control
(Deaves et al. 2010; Moore and Healy 2008).
In psychology, calibration is usually studied on the basis of general knowledge questions (e.g.,
comparisons of population sizes of different cities or their geographical position) generated by
researchers. Study participants answer sets of questions, and after each particular item (or after a set of
questions or at the end of the whole task), they have to assess the probability that the given answer (or the
whole set) was correct. Appropriate calibration takes place “if, over the long run, for all propositions
assigned a given probability, the proportion that is true is equal to the probability assigned” (Koriat et
al. 1980, p. 109). Put simply, a well-calibrated judge is able to correctly assess the number of
mistakes he or she makes. Conversely, miscalibration refers to the difference between the accuracy rate
and the probability assigned that a given answer is correct. The better-than-average effect usually refers
to the cognitive bias whereby people, in general, tend to have an unrealistically positive view of themselves
(Kruger and Dunning 1999). This causes people to overestimate their positive qualities and abilities and to
underestimate their negative qualities relative to others (Dunning et al. 1989). Finally, illusion of control
is the tendency of people to overestimate their ability to control events, even in situations governed purely
by chance (Thompson 1999). Taken together, all three indicators reflect individuals’ overestimation of
their own knowledge, which can have a serious impact on organizations, causing problems, such as stock
market bubbles, as well as entrepreneurial and project failure (Glaser and Weber 2007).
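As a minimal numeric sketch (the judgment data below are made up, not taken from any study), miscalibration can be quantified as the gap between a judge’s mean stated confidence and his or her actual accuracy rate:

```python
from statistics import mean

# Hypothetical judgments: (stated confidence that the answer is correct, was it correct?)
judgments = [
    (0.9, True), (0.9, False), (0.8, True), (0.8, False),
    (0.7, True), (0.9, False), (0.8, True), (0.6, False),
]

mean_confidence = mean(conf for conf, _ in judgments)
accuracy = mean(1.0 if correct else 0.0 for _, correct in judgments)
miscalibration = mean_confidence - accuracy  # positive value indicates overconfidence

print(f"confidence={mean_confidence:.2f}, accuracy={accuracy:.2f}, "
      f"miscalibration={miscalibration:+.2f}")
```

Here the judge is correct on half of the items but claims 80% confidence on average, giving a miscalibration of +0.30.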
Although most research studies in the past have focused on only one of the three indicators to capture
overconfident behavior, more recent studies have advocated embracing all of them “[…] to keep these
distinctions in mind for a more thorough understanding of underlying psychological processes and
findings that directly influence the agents’ behaviour” (Skala 2008, p. 38). Hence, in keeping with more
recent overconfidence research, our paper is based on the following definition, which includes all three
indicators of overconfidence: Overconfidence is any behavior based on systematically incorrect
assessments of one’s knowledge and skills as well as the actual ability to control future events.
In previous IS research, little attention has been paid to the investigation of overconfidence. Only a few
studies have focused on overconfidence in IT decisions in general, as well as in more specialized domains
like IT outsourcing. Van der Vyver (2004), for example, found that IT managers show better accuracy in
judgments and a better calibration than do accounting and marketing managers. Jamieson and Hyland
(2006) found considerable empirical evidence of bias in IT investment evaluations in organizations, with
both positive and negative impacts on decision outcomes. Rouse and Corbitt (2007) found non-rational
effects such as overconfidence to be one reason for the phenomenon that managers outsource IT,
although there is often no reliable and valid evidence for the outsourcing’s benefits. McKenzie et al.
(2008) showed that IT experts and novices are similarly overconfident, with no significant differences
between them. Based on a study of managerial biases in IT decision-making, Khan and Kumar (2009, p.
6) concluded that “at the project selection stage, overconfidence in IT managers may lead to
overestimation of their knowledge”.
Overall, while to date a few studies have been dedicated to overconfidence in IT decision-making, to the
best of our knowledge, no previous studies have empirically investigated what kinds of IT decision-
makers’ knowledge types affect overconfident behavior (Khan and Kumar 2009). Examining the
relationship between knowledge types and overconfidence would, however, provide an advanced
understanding of important antecedents of overconfidence in IT decision-making.
Knowledge Types
Russo and Schoemaker (1992) indeed claim that having accurate knowledge is essential for making good
decisions, and it has been demonstrated empirically that knowledge affects information search,
information processing and decision-making (Brucks 1985; Carlson et al. 2009). In overconfidence
literature, knowledge has also been identified as a main influencing factor for overconfidence (McKenzie
et al. 2008; Zacharakis and Shepherd 1999). Drawing on previous consumer and decision-making
research, knowledge can be divided into three main types of knowledge, namely experienced knowledge
(EK), objective knowledge (OK) and subjective knowledge (SK) (Brucks 1985; Carlson et al. 2009).
EK, which has been studied in different strands of literature (Brucks 1985; Dodd et al. 2005; Raju et al.
1995), is generally considered a summation of a subject’s past product or domain-related experience,
including knowledge about a product or domain, participation in a domain or use/ownership of a product
(Alba and Hutchinson 1987; Dodd et al. 2005). Adapted to the IT context of our study, we define EK as a
subject’s working (i.e., professional), IT-related (e.g., experience in the IT department) or domain-specific
(e.g., IT outsourcing) experience that the subject has accrued in the past. OK denotes the actual content
and organization of knowledge held in memory. In other words, OK refers to the facts a person knows,
which can be assessed using objective tests of an individual’s knowledge (Raju et al. 1995). Transferred to
the IT context, this type of knowledge refers to domain-specific knowledge about facts such as the market
shares of main IT vendors or the growth rates of interesting new IT innovations (Russo and Schoemaker
1992). SK is the perceived level of a subject’s knowledge and has typically been measured by subjects’ self-
reports of their knowledge of a product or a specific domain (e.g., Brucks 1985; Carlson et al. 2009).
Hence, SK “[…] reflects what we think we know […]” (Carlson et al. 2009, p. 864) rather than an objective
measurement of our knowledge. Transferred to the IT context, SK can be defined as subjects’ self-
assessments of their knowledge in comparison to their peers (i.e., other IT managers).
Hypothesis Development
To illustrate our hypotheses on the influence of the three types of knowledge on IT decision-makers’
overconfidence, we propose the research model shown in Figure 1.
[Figure 1. Research Model: The three knowledge types (experienced, objective and subjective knowledge) are hypothesized to affect overconfidence, which is reflected in miscalibration, the better-than-average effect and illusion of control. The hypothesized effects are H1 (+) for experienced knowledge, H2 (-) for objective knowledge and H3 (+) for subjective knowledge.]
Drawing on consumer and organizational decision-making research literature, we examine the effects of
three different types of knowledge on overconfidence to find out whether IT decision-makers are
overconfident or not and which type of knowledge is positively or negatively related to overconfidence in
IT investment decisions. The research model in Figure 1 suggests that the different knowledge types do
not affect overconfidence in the same way. Rather, it reflects our assumption that experienced knowledge
and subjective knowledge are positively associated with overconfidence, whereas objective knowledge is
negatively related to a decision-maker’s overconfidence. In the sections that follow, we further elaborate
on how the three different knowledge types affect IT decision-makers’ overconfidence.
Experienced Knowledge and Overconfidence
Experienced knowledge is considered one of the most important variables in research on non-rational
effects (Phillips et al. 2004). Recent research on the influence of EK on overconfidence shows some
evidence that more experienced individuals do not perform better in decision-making than less
experienced individuals (Shepherd et al. 2003). This is mainly due to the fact that individuals with higher
levels of EK overestimate the relevance of their past experience by transferring it to completely different
situations (Finkelstein et al. 2008). However, very rarely are two situations entirely similar. Misleading
experience as a contributory factor to overconfident decisions has been confirmed by several other
studies. Van de Venter and Michayluk (2008), for example, tested the forecasting ability of financial
planners in Australia. Based on a calibration study, the participants had to supply a high and a low
estimate for the S&P/ASX200 index by the end of the year. The participants were asked to choose the
numbers far enough apart to be 90% sure that the actual answer would fall somewhere in between the two
estimates. Interestingly, the two researchers found a “[…] positive correlation between overconfidence
and years’ experience” (van de Venter and Michayluk 2008, p. 554). Further, Deaves et al. (2010)
revealed “[…] that market experience does not lead to better calibration […]” (Deaves et al. 2010, p. 411).
There is, rather, a significant negative correlation that indicates that “[…] experienced and successful
market forecasters become even more overconfident over time […]” (van de Venter and Michayluk 2008,
p. 554). Given these consistent results in recent empirical research studies, we argue that experienced
knowledge will have a positive effect on IT decision-makers’ level of overconfidence. Accordingly,
H1: IT decision-makers’ experienced knowledge is positively related to their level of overconfidence.
Objective Knowledge and Overconfidence
“Charles Darwin (1871) sagely noted over a century ago, ‘Ignorance more frequently begets confidence
than does knowledge’” (Kruger and Dunning 1999, p. 1121). Taking Darwin’s statement as a guiding
hypothesis, Kruger and Dunning (1999) investigated people’s self-knowledge of whether they are
incompetent in certain domains. They argued that the skills that engender competence are often the same
skills necessary to evaluate one’s own or another’s competence in a specific domain. Hence, “[…] the same
knowledge that underlies the ability to produce correct judgments is also the knowledge that underlies
the ability to recognize correct judgments. To lack the former is to be deficient in the latter” (Kruger and
Dunning 1999, p. 1122). In their research studies, Kruger and Dunning (1999) found that people who are
objectively unskilled also “[…] suffer a dual burden: Not only do these people reach erroneous
conclusions and make unfortunate choices, but their incompetence robs them of the meta-cognitive
ability to realize it” (Kruger and Dunning 1999, p. 1121). These unskilled people significantly overestimate
their own performance by rating their performance as better than average, although their actual
performance is worse than average (Ehrlinger et al. 2008). In addition, Kruger and Dunning (1999)
observed that not only do objectively incompetent people suffer from a distorted self-assessment of
performance, but their objectively competent counterparts do as well. “Although they perform
competently, they fail to realize that their proficiency is not necessarily shared by their peers” (Kruger
and Dunning 1999, p. 1131). Kruger and Dunning (1999) found for objectively competent people that they
underestimated their own performance in comparison to the peer group, rating their performance as
below average, although their actual performance was above average.
Overall, the patterns found in the studies by Kruger and Dunning have been replicated across a wide
range of subjects in a wide range of tasks of knowledge and skills (Ehrlinger and Dunning 2003). For
example, people incorrectly perceive how well they have actually conveyed their feelings, doctors
inaccurately estimate their level of knowledge of illnesses, and nurses erroneously evaluate their life
support skills (Hodges et al. 2001; Tracey et al. 1997; Marteau et al. 1989; Riggio et al. 1985). People are
also incorrect in their confidence estimates of the accuracy of their actions, such as their judgments of
whether someone is lying (DePaulo et al. 1997) and their eyewitness identifications in a lineup (Sporer et
al. 1995). This inability to accurately estimate one’s own performance becomes even more pronounced in
situations of excessive cognitive load, as is most often the case when making high-stake decisions in
organizations (Roch et al. 2000). Based on this previous empirical evidence, we argue that IT managers
also suffer from this pattern of incorrect self-assessment of performance in which objective knowledge is
negatively associated with overconfidence. Hence, we derive
H2: IT decision-makers’ objective knowledge is negatively related to their level of overconfidence.
Subjective Knowledge and Overconfidence
In contrast to objectively measurable knowledge metrics, subjective knowledge refers to people’s self-
assessment of their own knowledge. This kind of knowledge has previously been associated with
overconfidence, “[…] defined as an expectancy of a personal success probability inappropriately higher
than the objective probability would warrant. It [can be] predicted that factors from skill situations […]
introduced into chance situations would cause [participants] to feel inappropriately confident.” (Langer
1975, p. 312). In other words, as psychological research demonstrates, people tend to believe they are able
to influence events that in fact are governed mainly, or purely, by chance (Taylor and Brown 1988). This
so-called ‘illusion of control’ thus refers to an overestimation of a subject’s ability to cope with and predict
future events (Simon et al. 1999). An extreme example of this illusion would be an insistence on throwing
dice personally, in the belief that this would lead to a more favorable result. Moreover, if people expect
certain outcomes and these outcomes do occur, they are prone to attribute the outcome to their
skill rather than luck and thus re-affirm their belief in control over a situation where the only factor is
probability (Langer 1975). Managers who suffer from this specific kind of overconfidence are dangerous to
firms as they tend to generate overoptimistic performance estimates and, therefore, make excessively
risky decisions (Barnes 1984). Illusion of control has been found in a variety of experiments on
chance-driven tasks, including the participation of a confident or a nervous competitor, choosing lottery tickets or
being assigned one, engaging in familiar or unfamiliar lotteries or chance games and making one’s own
guesses or guessing through a proxy (Langer 1975). In all these situations, participants have been found to
express excessive confidence in their control over outcomes of chance-driven tasks. A meta-analysis by
Presson and Benassi (1996) also documents the prevalence of illusion-of-control effects across a wide
range of studies and experimental variations. Previous research on the effects of subjective knowledge has
shown that people with a high degree of subjective knowledge often do not want to involve or consult
others in their own decision-making. Rather, they base their decisions on their self-assessed knowledge
and, as a consequence, suffer from high illusion of control (Gino et al. 2011; Mattila and Wirtz 2002). In
line with previous studies examining people’s illusion of control in decision-making, we thus argue that IT
decision-makers who overestimate their own subjective knowledge also suffer from overconfidence, as
indicated by high illusion of control. As such, we hypothesize that
H3: IT decision-makers’ subjective knowledge is positively related to their level of overconfidence.
Research Methods
Research Context, Procedures and Descriptives
We decided to base our research study on a specific IT investment decision that has gained continued
popularity in recent years, namely IT outsourcing (ITO). ITO, the transfer of all or part of a company’s
IT functions to an outside party, plays a major role in the strategic arsenal of today’s organizations
(Benamati and Rajkumar 2002; Gorla and Mei Bik 2010). The ITO market accounts for 67% of all global
outsourcing deals; the outsourcing industry is growing rapidly, and firms across all industries and sizes
outsource or consider ITO as an alternative to their internal IT functions (Benamati and Rajkumar 2002).
Due to our focus on IT (i.e., ITO) decisions, only professionals working in the IT department of a company
were addressed in our study. These professionals should have sufficient decision responsibility in their
respective domain and thus predominantly come from middle and upper IT management. Based
on a representative random sample extracted from the Hoppenstedt firm database, which is one of the
largest commercial business data providers in Germany and contains over 300,000 company profiles,
400 IT managers were invited to an online survey. As incentives, we guaranteed response anonymity and
also offered a free management report presenting the main results of our study. Out of 181 total
respondents, 15 had to be eliminated due to missing or inconsistent data, resulting in a net sample size of
166 (see further descriptive statistics on the survey respondents in Table 1).
Table 1. Descriptive Statistics on the Survey Respondents

Variable                                  Responses (in %)
Age (years)                               18-24: 2.4 | 25-39: 27.7 | 40-54: 54.2 | 55-69: 15.1 | >70: 0.6
Gender                                    Male: 98.8 | Female: 1.2
Hierarchy Level                           Lower Mgmt.: 6.6 | Middle Mgmt.: 66.9 | Upper Mgmt.: 26.5
Professional Experience (years)           <10: 10.8 | 10-20: 36.2 | 21-30: 34.9 | 31-40: 14.5 | >40: 3.6
Experience in IT Departments (years)      <10: 22.3 | 10-20: 48.2 | 21-30: 23.5 | 31-40: 5.4 | >40: 0.6
ITO Experience (decisions involved in)    <4: 55.4 | 4-10: 33.2 | 11-20: 8.4 | >20: 3.0
Four weeks after the first email, we sent out a reminder email to all who had not yet answered. Non-
response bias was assessed by verifying that responses received before and after the reminder email were
not significantly different and by verifying that early (first 50) and late (last 50) respondents were not
significantly different (Armstrong and Overton 1977). We compared the two samples based on their socio-
demographics and responses to principal constructs. T-tests showed no significant differences (p>0.05),
indicating that non-response bias was not a pervasive threat.
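An early/late comparison of this kind can be sketched as follows. The construct scores below are purely hypothetical, and Welch’s t statistic (which tolerates unequal variances) is computed with Python’s standard library and compared against an approximate two-sided 5% critical value of about 2.0 rather than an exact p-value:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical construct scores (7-point scale) for early and late respondents
early = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7, 4.0, 4.3]
late  = [4.0, 4.2, 3.9, 4.1, 3.8, 4.4, 4.1, 3.9, 4.2, 4.0]

t = welch_t(early, late)
# |t| well below ~2.0 suggests no significant early/late difference at p = .05
print(f"t = {t:.2f}")
```

With larger groups (such as the first and last 50 respondents), the same statistic would simply be compared against the critical value for the corresponding degrees of freedom.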
In quantitative mono-method research especially, common method variance (CMV) is a possible hazard,
which we tried to address by following some suggestions from Sharma et al. (2009). For example, we used
a number of different approaches (e.g., a calibration study and Likert scales) to capture both behavioral
and self-reported measures for our independent and dependent variables. Moreover, we used items that
measured factual and verifiable behaviors to keep items as concrete as possible. Finally, we guaranteed
response anonymity as a procedural remedy to attempt to reduce method bias.
Measurement of Constructs
Drawing on previous studies in overconfidence and consumer research, we used miscalibration, the
better-than-average effect and illusion of control to tap into overconfidence (Deaves et al. 2010; Moore
and Healy 2008) and experienced, objective and subjective knowledge (Brucks 1985; Carlson et al. 2009)
to capture the different knowledge types investigated in this study (see Tables 1 to 4 in the Appendix,
which include the questions of the online survey).
Measurements of Overconfidence
Miscalibration is probably the most established operationalization of overconfidence and is usually
measured within a so-called calibration study. In such studies, participants are asked a number of
knowledge questions, each requiring a single numerical best estimate (e.g., how many
people live in the USA?). The participant then has to provide an interval corresponding to a given level of
confidence, e.g., stating a high and a low estimate such that there is an X% chance that the correct answer
falls somewhere within these limits (Klayman et al. 1999). The accuracy of such estimates is usually
measured in terms of a hit rate that refers to how often the intervals provided by subjects contain the true
value (McKenzie et al. 2008). Hit rates are often compared to the degree of confidence reported in the
intervals. Van de Venter and Michayluk (2008) show, for example, that miscalibration can range from
very high (hit rate=22% within a 90% confidence interval) to relatively low (hit rate =80% within a 95%
confidence interval). Calibration studies are not meant as a quiz to find out what the participants do or do
not know; rather, they are conducted to measure how well participants are aware of the limits of their own
knowledge.
Our study’s survey also included calibration tasks as a way of measuring overconfidence, asking the
respondents to answer five knowledge questions. Based on the results of previous studies, five questions is a
sufficient number to get a reliable classification of participants based on their degree of miscalibration
(Glaser and Weber 2007; Russo and Schoemaker 1992). Each question had one correct numerical answer.
For each question, respondents had to provide a low and a high estimate that they were 90% certain to
capture the correct answer. Consistent with past research (McKenzie et al. 2008), the respondents were
asked about domain-specific knowledge concerning ITO (e.g., What percentage of a firm’s total IT budget
is expended on IT outsourcing? or What percentage of German firms outsource their data centers?). The
questions were extracted from two current surveys on the European ITO market conducted by Orange
Business Services (Orange Business Services 2009) and PricewaterhouseCoopers (Messerschmidt et al.
2008). Based on the five knowledge questions, we computed the hit rate for each participant. We also
calculated two other measures that are important for the interpretation of the hit rate, providing insights
into why hit rates are relatively high or low. First, we subtracted the low estimate from the high estimate
for each question and participant and averaged the interval sizes across the questions. This gave us an
average interval size (or width) for each participant depicting the subject’s level of uncertainty. Second,
the so-called interval error, i.e., the absolute value of the difference between the correct answer (i.e., the
true value) and the midpoint of the interval, was calculated for each question and participant and also
averaged across the questions. This absolute value should gauge subjects’ objective knowledge, as the
interval error shows how well a participant can estimate the correct answer independently of the hit rate
(Yaniv and Foster 1997). To avoid confounding effects from revealing the correct answers to the
knowledge questions, the correct answers were presented to the participants only after they had
submitted the survey.
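As an illustrative sketch, the three calibration measures described above can be computed as follows. All numbers are invented for illustration and are not taken from our survey:

```python
import numpy as np

# Invented data for one participant: low/high bounds of the five 90%
# confidence intervals and the corresponding true values.
lows = np.array([20.0, 10.0, 30.0, 5.0, 40.0])
highs = np.array([60.0, 50.0, 70.0, 25.0, 90.0])
truth = np.array([35.0, 55.0, 50.0, 15.0, 65.0])

# Hit rate: share of intervals that capture the true value
hit_rate = np.mean((lows <= truth) & (truth <= highs))

# Average interval size (width): the participant's stated uncertainty
interval_size = np.mean(highs - lows)

# Average interval error: distance between interval midpoint and true value,
# used as a proxy for objective knowledge
interval_error = np.mean(np.abs(truth - (lows + highs) / 2.0))
```

Note that a participant can raise the hit rate either by knowing the answers more precisely (smaller interval errors) or simply by widening the intervals.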
A better-than-average effect occurs when people believe that they perform better than others in a certain
peer group. According to Kruger et al. (1999), the better-than-average effect is thus usually measured by
comparing people’s self-assessment of performance (relative to others) to their actual performance. We
operationalized the better-than-average effect in a similar way. After the participants had answered the
knowledge questions, we asked them to estimate how they performed relative to other participants by
indicating the percentage of their peers (i.e., other IT decision-makers) that they believed they had
outperformed in the calibration study in terms of interval error. This value was then compared to the
participants’ actual performance (i.e., their actual interval error) derived from the calibration study. To
arrive at an analysis similar to Kruger and Dunning’s, we normalized the actual performance (i.e., the
interval error) to a 0–100% scale (Kruger and Dunning 1999).
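A hypothetical sketch of this comparison (the error values and self-reports below are invented, not our survey data):

```python
import numpy as np

# Invented sample: average interval error per participant (lower = better)
# and the percentage of peers each claims to have outperformed.
errors = np.array([4.0, 9.0, 2.5, 7.0, 12.0])
claimed_pct = np.array([70.0, 60.0, 55.0, 80.0, 40.0])

# Actual percentile: percentage of participants with a larger (worse) error
actual_pct = np.array([np.mean(errors > e) * 100 for e in errors])

# Positive differences indicate a better-than-average (overplacement) effect
bta = claimed_pct - actual_pct
```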
To get a reliable measure for illusion of control, we adopted four items from previous overconfidence
studies, with minor changes to the wording (Burger and Cooper 1979; Menkhoff et al. 2006). The
respondents were asked to answer ITO-specific questions measuring their control position (e.g., Most of
the news on ITO is not surprising to me; When a service provider does not meet the requirements as
arranged, it is not surprising to me). To arrive at a single score for the control position of IT decision-
makers, we averaged these items.
Measurement of Knowledge Types
In line with empirical studies on overconfidence and knowledge research (e.g., Forbes 2005; McKenzie et
al. 2008; Menkhoff et al. 2006; van de Venter and Michayluk 2008), we measured experienced
knowledge with several measures covering different aspects of decision-makers’ experience. Overall
professional experience and working experience in IT departments were measured by the number of years
of professional/IT experience. As a more domain-specific type of experienced knowledge, IT outsourcing
experience was measured by the number of ITO decisions that IT managers have been involved with in
their careers. Finally, we measured the hierarchy level of the participants in their respective organization
by distinguishing between lower, middle and upper management (see also Table 1).
Objective knowledge “[…] has generally been assessed using objective tests of an individual’s extent of
knowledge about a domain” (Raju et al. 1995, p. 154). As reported above and in line with previous
calibration studies (e.g., Dodd et al. 2005; Raju et al. 1995), we used the actual interval errors from the
calibration study as measures for OK indicating the precision of a participant’s answer. Subjective
knowledge is generally assessed by subjects’ self-reports of their skills, performance, and past successes in
a specific domain compared to their peers’ skills, performance and past successes (Brucks 1985; Raju et al.
1995). To cover these three aspects of subjective knowledge, we drew on three measures suggested in
previous empirical studies capturing participants’ self-assessments related to their own skills,
performance and past ITO successes in comparison to their peers (Larrick et al. 2007).
Results
Experienced Knowledge and Overconfidence
Hypothesis 1 predicted that there is a positive relation between experienced knowledge and
overconfidence. Table 2 depicts the correlations between the different facets of EK (i.e., (1) hierarchy level,
(2) professional experience, (3) experience in IT departments, and (4) ITO experience) and the hit rate we
derived from our calibration study.
Table 2. Correlations between Experienced Knowledge and Hit Rate
Hit Rate 1 2 3 4
Hit Rate 1.00
1. Hierarchy Level .162* 1.00
2. Professional Experience -.265** .154* 1.00
3. Experience in IT Departments -.179* .007 .706** 1.00
4. ITO Experience .129* .098 .200** .238** 1.00
*p<0.05; **p<0.01
As this research study is still in an early stage of theorizing, we used a correlation analysis instead of more
sophisticated statistical tests (e.g., regression analysis) to examine the relationship between experienced
knowledge and overconfidence. Overall, although the correlation coefficients are rather low in absolute
terms, the data indicate significant correlations between the hit rate and all EK variables. Hierarchy level
(r = .162; p < .05) and ITO experience (r = .129; p < .05) show positive correlations with hit rate. Total
professional experience (r = -.265; p < .01) and experience in IT departments (r = -.179; p < .01) are
negatively (and significantly) related to hit rate, which shows that IT decision-makers with more work
and IT experience are worse calibrated than inexperienced decision-makers. Only a higher hierarchical
level and involvement in ITO decisions reduce the miscalibration of ITO decision-makers. At first glance,
this result seems counterintuitive. Why should professional and IT experience make decision-making
less calibrated? To answer this question, one has to consider that the hit rate in
calibration studies can be increased in two ways. McKenzie et al. (2008) argued that the participant either
has more objective knowledge, and therefore produces smaller interval errors, or has less objective
knowledge and chooses a larger interval size, which raises the chance of encircling the correct answer to
get a hit. Consistent with Russo and Schoemaker (1992), decision-makers who do so know that they do
not know the exact answer and therefore have a higher level of meta-knowledge, which refers to a person’s
knowledge about the borders of his or her knowledge. As illustrated in Table 3, both effects can be shown
in our study.
Table 3. Correlations between the Levels of Experience, Error and Interval Size
Interval Error  Interval Size  1  2  3  4
Interval Error 1.00
Interval Size -.427** 1.00
1. Hierarchy Level -.144* .023 1.00
2. Professional Experience .164* -.197** .154* 1.00
3. Experience in IT Departments .168* -.090 .007 .706** 1.00
4. ITO Experience -.091 .147* .098 .200** .238** 1.00
*p<0.05; **p<0.01
IT managers who are at a higher hierarchical level (r = -.144; p < .05) or have more experience with ITO
decisions (r = -.091; n.s.) show smaller interval errors in the calibration study. But they also broaden their
intervals. This is especially the case for participants with more ITO experience (r = .147; p < .05). They are
seemingly more aware of the complexity of the market. Due to this meta-knowledge, they are able to
better assess their own objective knowledge. By contrast, participants with more professional experience
(r = .164; p < .05) or more experience in IT departments (r = .168; p < .05) show larger interval errors.
That is, they have less objective knowledge. Furthermore, IT decision-makers with high professional
experience in particular narrow their interval size (r = -.197; p < .01). Taken together, H1 is only
partially supported. ITO experience and the level of hierarchy are
negatively related to overconfidence, while professional experience and experience in IT departments are
positively associated with overconfidence.
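As a sketch, the kind of bivariate correlation analysis reported in Tables 2 and 3 can be reproduced with SciPy. The two vectors below are invented stand-ins for the survey variables, not our actual data:

```python
from scipy.stats import pearsonr

# Invented stand-ins: hit rates and years of professional experience
hit_rate = [0.8, 0.4, 0.6, 0.2, 1.0, 0.4]
experience = [5, 20, 8, 25, 3, 15]

# Pearson correlation coefficient and two-sided p-value;
# a negative r with p < .05 would mirror the pattern reported for
# professional experience in Table 2.
r, p = pearsonr(hit_rate, experience)
```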
Objective Knowledge and Overconfidence
Are IT managers who do well in a certain task aware of their competence, and are those who do badly
aware of their incompetence? To answer this question, we adopted a well-established test developed by
Kruger and Dunning (1999), who asked whether people who perform well (or badly) in a certain task are
aware of their (in)competence. To this end, they assigned each participant a percentile rank based on
their actual performance quartiles and plotted those ranks against the percentiles of their self-reported
performance relative to peers (see Ehrlinger et al. 2008 for an example of this test). We adopted
this procedure by using the interval error as the actual performance indicator. On average, our participants
put their performance in ITO decision making in the 58th percentile, which exceeded the actual mean
percentile (50, by definition) by eight percentage points (one-sample t(165) = 8.755, p < .0001). As Figure
2 shows, the participants in the bottom quartile grossly overestimated their performance relative to their
peers. While their actual performance in the calibration study fell in the 34th percentile, they (on average)
put themselves in the 58th percentile. These self-reported estimates were not only significantly higher than
the ranking they actually achieved (paired t(40) = 9.008, p < .0001) but also significantly exceeded the
actual mean percentile (one-sample t(40) = 4.128, p < .0001). As such, participants in the bottom quartile
of the distribution rated themselves as better than average. Interestingly, participants in other quartiles
did not overestimate their own performance. Participants in the second quartile, for example, assessed
their own performance correctly. Those in the quartiles above the median significantly underestimated
their performance relative to their peers (paired t3rd quartile(41) = -6.786, p < .0001 and t4th quartile(40) = -
14.632, p < .0001). Based on these findings, hypothesis 2 can be supported, as overconfidence (as
indicated by the better-than-average effect) decreases with increasing objective knowledge.
[Figure 2 plots two lines over the four objective performance quartiles (bottom to top quartile, based on
interval error): the mean percentile of self-reported knowledge/performance relative to peers and the
mean percentile of objective knowledge/performance; the percentile axis ranges from 20 to 90.]
Figure 2. Self-Assessment of Performance in ITO as a Function of Actual Performance
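The one-sample and paired t-tests used above can be sketched as follows. The eight "bottom-quartile" observations are invented for illustration and merely mimic the reported pattern:

```python
import numpy as np
from scipy.stats import ttest_1samp, ttest_rel

# Invented bottom-quartile data: self-reported vs. actually achieved
# percentile ranks for eight hypothetical participants.
claimed = np.array([55.0, 62.0, 58.0, 60.0, 54.0, 59.0, 61.0, 57.0])
actual = np.array([30.0, 40.0, 35.0, 38.0, 28.0, 33.0, 36.0, 32.0])

# One-sample t-test: do participants place themselves above the 50th percentile?
t_above_mean, p_above_mean = ttest_1samp(claimed, 50.0)

# Paired t-test: does self-placement exceed the actually achieved ranking?
t_paired, p_paired = ttest_rel(claimed, actual)
```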
Subjective Knowledge and Overconfidence
In our third hypothesis, we proposed that people with higher subjective knowledge also show
higher overconfidence in terms of illusion of control. Based on the self-assessment variables on skills,
performance and success, we divided our sample into two groups for each variable. Group 0 consisted of
participants who assessed their skills, performance or success as below-average or average, whereas group
1 consisted of participants who assessed themselves as above-average on the three dimensions. Due
to the non-normality of our data, we conducted nonparametric Mann-Whitney U tests. Table 4 shows
that for all three dimensions, participants who rated themselves as above-average also showed
significantly higher illusion of control in terms of their control position (p ≤ .01). These preliminary
results support our hypothesis 3.
Table 4. Rank and Test Statistics for Illusion of Control (Grouping Variables; N = 166)

Skills (Control Position):
  Below Average or Average (0): n = 97, mean rank = 75.08
  Above Average (1): n = 69, mean rank = 95.33; |∆| = 20.25
  Mann-Whitney U, asymp. sig. (2-tailed): p = .007

Performance (Control Position):
  Below Average or Average (0): n = 76, mean rank = 72.81
  Above Average (1): n = 90, mean rank = 92.53; |∆| = 19.72
  Mann-Whitney U, asymp. sig. (2-tailed): p = .008

Success (Control Position):
  Below Average or Average (0): n = 78, mean rank = 73.38
  Above Average (1): n = 88, mean rank = 92.47; |∆| = 19.09
  Mann-Whitney U, asymp. sig. (2-tailed): p = .010
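The group comparisons in Table 4 can be sketched with SciPy's Mann-Whitney U test. The control-position scores below are invented Likert-scale averages, not our survey data:

```python
from scipy.stats import mannwhitneyu

# Invented control-position scores (averaged Likert items):
# group 0 rated themselves below average or average, group 1 above average.
group0 = [2.0, 2.5, 3.0, 2.25, 3.5, 2.75]
group1 = [3.5, 4.0, 3.75, 4.5, 3.25, 4.25]

# Nonparametric two-sided test; a small p-value indicates that the two
# groups differ in their control-position scores.
u_stat, p_value = mannwhitneyu(group0, group1, alternative="two-sided")
```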
Taken together, the results of this preliminary study on the relations between knowledge types and
overconfidence provided a mixed picture (see Table 5). In terms of experienced knowledge, we found that
hierarchy level and domain-related (ITO) experience were negatively related to overconfidence, while
professional experience and general IT experience (in IT departments) were positively related to
overconfidence. Finally, objective knowledge was confirmed to be negatively related to
overconfidence, whereas subjective knowledge was positively related to overconfidence.
Table 5. Summary of Results
Knowledge Type                  Impact on Overconfidence
Experienced Knowledge
  Hierarchy Level               Negative
  Professional Experience       Positive
  Experience in IT Departments  Positive
  ITO Experience                Negative
Objective Knowledge             Negative
Subjective Knowledge            Positive
Discussion
In this preliminary study, we investigated the relationship between different knowledge types and
overconfidence in IT investment decisions. In doing so, we aimed to find out whether IT decision makers
suffer from overconfidence and which kinds of knowledge are positively or negatively related to
overconfident behavior. We used IT outsourcing decisions as a surrogate for IT investment decisions,
since these decisions play a major role in the strategic arsenal of today’s organizations (Benamati and
Rajkumar 2002; Gorla and Mei Bik 2010). By showing that different knowledge types seem to
significantly affect IT managers’ overconfidence in their decision-making behavior, this study takes a
first step toward advancing our understanding of decision-making distortions in the context of IT
investments. As we show, IT decision makers’ knowledge does not reduce
overconfident behavior per se. Knowledge can be a boon and a bane at the same time because it can
simultaneously have beneficial and detrimental effects on overconfidence. In the following, the practical
and theoretical implications of this preliminary study are presented according to the three different types
of knowledge that may function as important indicators of overconfidence.
We showed that only specific types of experienced knowledge, in our case previous IT outsourcing
experience and higher hierarchical levels, are positively related to hit rates which in turn indicate lower
overconfidence. These subjects are more aware than others of the limits of their knowledge. Hence, they
broaden their intervals to heighten their hit rates. Professional or IT experience in general does not lower
overconfidence. Rather, subjects with a lot of IT and professional experience seem to overestimate their
knowledge in IT-relevant topics like ITO. Due to this illusory knowledge, they scale down their interval
size, which leads to lower hit rates. Our findings, therefore, only partially confirm former overconfidence
studies in the context of IT investment decisions. McKenzie et al. (2008), for example, showed that
experts generally narrow their interval size, which also reduces their hit rate. Based on our more nuanced
definition of experience, we can show that this holds only for rather nonspecific types
of experience, such as professional experience or general IT experience. Only very domain-specific types
of knowledge (in our study ITO knowledge) or a high position (i.e., hierarchy level) in the (IT)
organization seem to ensure that IT managers are able to reduce the distorting effects of overconfidence
by broadening their intervals to raise their chance of getting a hit. In practice, these results may help
staffing managers in their hiring processes to better gauge job applicants’ level of overconfidence.
Furthermore, the composition and arrangement of teams that regularly make important IT decisions in
their companies can be informed by our findings. Team members with a lot of work and/or IT experience
are not automatically qualified to make proper IT decisions in specialized fields like ITO. Hence, if there is
no real domain expert in a team, it would not seem unreasonable to seek advice from external consultants
with domain-specific expertise.
The objective knowledge of IT managers was found to be a double-edged sword in terms of affecting
overconfidence. As hypothesized, IT managers with very low levels of objective knowledge indeed
demonstrated high levels of overconfidence and thus inflated self-assessments as indicated by a
considerably strong over-estimation of their performance (i.e., better-than-average effect). With growing
objective knowledge, however, IT managers reduced the gap between self-assessments and their actual
performance and thus better estimated their own abilities. Further, IT managers with high objective
knowledge strongly underestimated their actual performance and thus did not demonstrate
overconfidence. According to psychological literature, they fall prey to the so-called false-consensus effect
(Ross et al. 1977). Simply put, these participants assume that because they performed so well, their peers
must have performed well likewise. This would have led top-quartile participants to underestimate their
comparative abilities (i.e., how their self-reported knowledge compares with that of their peers), but not
their absolute knowledge (i.e., interval error). For practitioners, the results show that a considerable share
of IT decision-makers are not aware that their actual decision-making abilities are worse than
their own self-assessments suggest. Overly hasty or misleading decisions based on wrong estimates or
assumptions may be the result. On the other hand, as we have seen from our results, very competent IT
decision-makers who perform better than the average underestimate their abilities considerably. This
could result in unnecessary caution about making a decision leading to tentative decision-making, longer
decision processes and/or lost opportunities. In either case, IT managers should regularly get
performance feedback from various sources (e.g., superiors, peers, external consultants) and engage in
group debates to expose them to new experiences and viewpoints and ultimately bring self-assessment of
their knowledge and actual knowledge into balance.
In terms of subjective knowledge, we found support for our hypothesis that subjective knowledge of IT
decision-makers is positively related to their overconfidence. IT managers who view themselves as above-
average in their skills, performance and decision success rate depicted a significantly higher (illusory)
control position than IT managers who consider themselves as below-average. They seem to think they
can control future events or believe that events in the past were foreseeable from the beginning. However,
this illusory control position can result in overly risky decisions. IT decision-makers who show high
subjective knowledge thus have to be careful that they do not suffer from illusory control over future
events due to their experience. To mitigate the risks of illusion of control and develop a sharper sense of
how much they really can control, decision-makers should be exposed to accurate, timely, and precise
feedback (e.g., by superiors, colleagues or external experts) that introduces a safeguard against their own
self-assessments. In addition, appropriate measures of holding decision-makers (and their supporting
teams) accountable for the consequences of their decisions may force them to confront the feedback,
recalibrate their perceptions about their own knowledge, and temper their (over-)confidence accordingly.
Limitations, Future Research and Conclusion
This study is subject to several limitations. First, our study is based on a cross-sectional design, which is
limited to a single point in time. It was therefore not possible to observe changes in overconfident
behavior over time. Future studies could examine overconfidence in a longitudinal setting. In
combination with an in-depth analysis of the different knowledge types, this could, for example, lead to a
more thorough understanding of how overconfidence triggered by knowledge gaps affects real decisions.
Second, our survey included just a small set of questions, owing to the time constraints associated with
survey studies of high-level decision-makers in companies. Nevertheless, the number and
types of questions were carefully selected based on previous overconfidence studies and tested for
reliability and validity. Future research could, however, attempt to include more questions to investigate
the impact of different knowledge types on miscalibration, the better-than-average effect and illusion of
control. In addition, more sophisticated statistical tests should be conducted to analyze not only
relationships but also the causality between the knowledge and overconfidence variables. Third, the study
used IT outsourcing decisions as a surrogate for IS investment decisions. Further studies should examine
other types of IS investment decisions, such as enterprise software purchase decisions (e.g., ERP systems) or
enterprise architecture decisions (e.g., ITIL adoption), to increase the generalizability of our findings.
Finally, our study focused on just one prominent type of cognitive bias in IT decision-making. As previous
research in psychology and sociology has shown, several other (e.g., cognitive, social) biases (e.g., framing,
confirmation or groupthink biases) can hamper sound decision-making (Finkelstein et al. 2008).
Although IS researchers have already begun to incorporate non-rational biases into their research models
(e.g., Cheng and Wu 2010; Kim and Kankanhalli 2009; Iacovou et al. 2009), we feel that there are plenty
of biases that have not been addressed in the context of IT decision-making but would advance our
understanding of important IT-related phenomena.
In conclusion, IT managers in organizations need to become more aware of the role of different
knowledge types and their influence on overconfidence in decision-making. Today’s complex, volatile, and
fast-paced business and technological environment is placing extraordinary stress on IT decision-makers.
Unfortunately, in the quest to be ever more efficient and productive, IT managers often become focused
largely on fast and frugal task performance. As a byproduct, they often fail to reflect on the consequences
of unsupportable confidence in their decisions and the possible factors triggering their overconfidence. To
overlook or ignore overconfidence and its negative effects, however, means that an IT department or even
the entire organization may pay a significant price when it comes to making sound IT investment decisions.
References
Alba, J.W., and Hutchinson, J.W. 1987. "Dimensions of Consumer Expertise," The Journal of Consumer
Research (13:4), pp. 411-454.
Apte, U., Sobol, M., Hanaoka, S., Shimada, T., Saarinen, T., Salmela, T., and Vepsalainen, A. 1997. "IS
Outsourcing Practices in the USA, Japan and Finland: A Comparative Study," Journal of
Information Technology (12:4), pp. 289-304.
Armstrong, J.S., and Overton, T.S. 1977. "Estimating Nonresponse Bias in Mail Surveys," Journal of
Marketing Research (14:3), pp. 396-402.
Barnes, J.H. 1984. "Cognitive Biases and Their Impact on Strategic Planning," Strategic Management
Journal (5:2), pp. 129-137.
Bassellier, G., Reich, B.H., and Benbasat, I. 2001. "Information Technology Competence of Business
Managers: A Definition and Research Model," Journal of Management Information Systems
(17:4), Spring2001, pp. 159-182.
Benamati, J., and Rajkumar, T.M. 2002. "The Application Development Outsourcing Decision: An
Application of the Technology Acceptance Model," Journal of Computer Information Systems
(42:4), p. 35.
Benaroch, M. 2002. "Managing Information Technology Investment Risk: A Real Options Perspective,"
Journal of Management Information Systems (19:2), Fall2002, pp. 43-84.
Benaroch, M., Jeffery, M., Kauffman, R.J., and Shah, S. 2007. "Option-Based Risk Management: A Field
Study of Sequential Information Technology Investment Decisions," Journal of Management
Information Systems (24:2), Fall2007, pp. 103-140.
Brucks, M. 1985. "The Effects of Product Class Knowledge on Information Search Behavior," Journal of
Consumer Research (12), pp. 1-16.
Burger, J., and Cooper, H. 1979. "The Desirability of Control," Motivation and Emotion (3:4), pp. 381-
393.
Carlson, J.P., Vincent, L.H., Hardesty, D.M., and Bearden, W.O. 2009. "Objective and Subjective
Knowledge Relationships: A Quantitative Analysis of Consumer Research Findings," Journal of
Consumer Research (35:5), pp. 864-876.
Cheng, F.-F., and Wu, C.-S. 2010. "Debiasing the Framing Effect: The Effect of Warning and
Involvement," Decision Support Systems (49:3), pp. 328-334.
Darwin, C. 1871. The Descent of Man. London: John Murray.
Deaves, R., Lüders, E., and Schröder, M. 2010. "The Dynamics of Overconfidence: Evidence from Stock
Market Forecasters," Journal of Economic Behavior & Organization (75), pp. 402-412.
DeBondt, W., and Thaler, R. 1995. "Financial Decision Making in Markets and Firms: A Behavioral
Perspective " Handbooks in Operational Research and Management Science (9), pp. 385-410.
DePaulo, B.M., Charlton, K., Cooper, H., Lindsay, J.J., and Muhlenbruck, L. 1997. "The Accuracy-
Confidence Correlation in the Detection of Deception," Personality and Social Psychology
Review (4:1), pp. 346-357.
Der Vyver, G. 2004. "The Overconfidence Effect and IT Professionals," European Conference on
Information Systems, pp. 163-175.
Dodd, T., Laverie, D., Wilcox, J., and Duhan, D. 2005. "Differential Effects of Experience, Subjective
Knowledge, and Objective Knowledge on Sources of Information Used in Consumer Wine
Purchasing," Journal of Hospitality & Tourism Research (29:1), pp. 3-19.
Du, S., Keil, M., Mathiassen, L., Shen, Y., and Tiwana, A. 2007. "Attention-Shaping Tools, Expertise, and
Perceived Control in IT Project Risk Assessment," Decision Support Systems (43:1), pp. 269-283.
Dunning, D., Meyerowitz, J.A., and Holzberg, A.D. 1989. "Ambiguity and Self-Evaluation: The Role of
Idiosyncratic Trait Definitions in Self-Serving Assessments of Ability," Journal of Personality and
Social Psychology (57:6), pp. 1082-1090.
Ehrlinger, J., and Dunning, D. 2003. "How Chronic Self-Views Influence (and Potentially Mislead)
Estimates of Performance," Journal of Personality and Social Psychology (84:1), pp. 5-17.
Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., and Kruger, J. 2008. "Why the Unskilled Are
Unaware: Further Explorations of (Absent) Self-Insight among the Incompetent," Organizational
Behavior and Human Decision Processes (105), pp. 98-121.
Enns, H.G., Huff, S.L., and Golden, B.R. 2003. "CIO Influence Behaviors: The Impact of Technical
Background," Information & Management (40:5), pp. 467-485.
Finkelstein, S., Whitehead, J., and Campbell, A. 2008. Think Again: Why Good Leaders Make Bad
Decisions and How to Keep It from Happening to You. Boston, MA: Harvard Business Press.
Forbes, D.P. 2005. "Are Some Entrepreneurs More Overconfident Than Others?," Journal of Business
Venturing (20:5), pp. 623-640.
Gartner. 2009. "Gartner Says IT Spending to Rebound in 2010 with 3.3 Percent Growth after Worst Year
Ever in 2009," in: http://www.gartner.com/it/page.jsp?id=1209913. Stamford, Connecticut.
Gino, F., Sharek, Z., and Moore, D.A. 2011. "Keeping the Illusion of Control under Control: Ceilings,
Floors, and Imperfect Calibration," Organizational Behavior and Human Decision Processes
(114:2), pp. 104-114.
Glaser, M., and Weber, M. 2007. "Overconfidence and Trading Volume," Geneva Risk & Insurance
Review (32:1), pp. 1-36.
Gorla, N., and Mei Bik, L. 2010. "Will Negative Experiences Impact Future IT Outsourcing?," Journal of
Computer Information Systems (50:3), pp. 91-101.
Hodges, B., Regehr, G., and Martin, D. 2001. "Difficulties in Recognizing One's Own Incompetence:
Novice Physicians Who Are Unskilled and Unaware of It," Academic Medicine (76:10), pp. 87-89.
Iacovou, C.L., Thompson, R.L., and Smith, H.J. 2009. "Selective Status Reporting in Information Systems
Projects: A Dyadic-Level Investigation," MIS Quarterly (33:4), pp. 785-A785.
Jamieson, K., and Hyland, P. 2006. "Good Intuition or Fear and Uncertainty: The Effects of Bias on
Information Systems Selection Decisions," Informing Science Journal (9), pp. 49-69.
Keren, G. 1997. "On the Calibration of Probability Judgments: Some Critical Comments and Alternative
Perspectives," Journal of Behavioral Decision Making (10:3), pp. 269-278.
Khan, S., and Kumar, R. 2009. "Understanding Managerial Decision Risks in IT Project Management: An
Integrated Behavioral Decision Analysis Perspective," American Conference on Information
Systems, pp. 120-131.
Khan, S., and Stylianou, A. 2009. "IT Project Management & Managerial Risk: Effects of Overconfidence,"
4th International Research Workshop on Information Technology Project Management, pp. 63-
73.
Kim, H.-W., and Kankanhalli, A. 2009. "Investigating User Resistance to Information Systems
Implementation: A Status Quo Bias Perspective," MIS Quarterly (33:3), pp. 567-582.
Klayman, J., Soll, J.B., González-Vallejo, C., and Barlas, S. 1999. "Overconfidence: It Depends on How,
What, and Whom You Ask," Organizational Behavior and Human Decision Processes (79:3), pp.
216-247.
Koriat, A., Lichtenstein, S., and Fischhoff, B. 1980. "Reasons for Confidence," Journal of Experimental
Psychology: Human Learning and Memory (6:2), pp. 107-118.
Kruger, J., and Dunning, D. 1999. "Unskilled and Unaware of It: How Difficulties in Recognizing One’s
Own Incompetence Lead to Inflated Self-Assessments," Journal of Personality and Social
Psychology (77:6), pp. 1121-1134.
Langer, E. 1975. "The Illusion of Control," Journal of Personality and Social Psychology (32:2), pp. 311-
328.
Larrick, R., Burson, K., and Soll, J. 2007. "Social Comparison and Confidence: When Thinking You’re
Better Than Average Predicts Overconfidence (and When It Does Not)," Organizational Behavior
and Human Decision Processes (102:1), pp. 76-94.
Marteau, T.M., Johnston, M., Wynne, G., and Evans, T.R. 1989. "Cognitive Factors in the Explanation of the
Mismatch between Confidence and Competence in Performing Basic Life Support," Psychology
and Health (3), pp. 173-182.
Mattila, A., and Wirtz, J. 2002. "The Impact of Knowledge Types on the Consumer Search Process: An
Investigation in the Context of Credence Services," International Journal of Service Industry
Management (13:3), pp. 214-230.
McKenzie, C.R.M., Liersch, M.J., and Yaniv, I. 2008. "Overconfidence in Interval Estimates: What Does
Expertise Buy You?," Organizational Behavior & Human Decision Processes (107:2), pp. 179-191.
Menkhoff, L., Schmidt, U., and Brozynski, T. 2006. "The Impact of Experience on Risk Taking,
Overconfidence, and Herding of Fund Managers: Complementary Survey Evidence," European
Economic Review (50:7), pp. 1753-1766.
Messerschmidt, M., Schülein, P., and Murnleitner, M. 2008. Der Wertbeitrag der IT zum
Unternehmenserfolg. Stuttgart: PricewaterhouseCoopers AG Wirtschaftsprüfungsgesellschaft.
Moore, D., and Healy, P. 2008. "The Trouble with Overconfidence," Psychological Review (115:2), pp.
502-517.
Orange Business Services. 2009. CxO Survey 2009 - Results: Outsourcing Services. Paris: Orange
Business Services.
Phillips, J., Klein, G., and Sieck, W. 2004. "Expertise in Judgment and Decision Making: A Case for
Training Intuitive Decision Skills," in Blackwell Handbook of Judgment and Decision Making,
D.J. Koehler and N. Harvey (eds.). Oxford: Blackwell, pp. 297-315.
Presson, P.K., and Benassi, V.A. 1996. "Illusion of Control: A Meta-Analytic Review," Journal of Social
Behavior & Personality (11:3), pp. 493-510.
Raju, P., Lonial, S., and Mangold, W. 1995. "Differential Effects of Subjective Knowledge, Objective
Knowledge, and Usage Experience on Decision Making: An Exploratory Investigation," Journal of
Consumer Psychology (4:2), pp. 153-180.
General Topics
16 Thirty Second International Conference on Information Systems, Shanghai 2011
Riggio, R.E., Widaman, K.F., and Friedman, H.S. 1985. "Actual and Perceived Emotional Sending and
Personality Correlates," Journal of Nonverbal Behavior (9), pp. 69-83.
Roch, S.G., Lane, J.A.S., Samuelson, C.D., Allison, S.T., and Dent, J.L. 2000. "Cognitive Load and the
Equality Heuristic: A Two-Stage Model of Resource Overconsumption in Small Groups,"
Organizational Behavior and Human Decision Processes (83:2), pp. 185-212.
Ross, L., Greene, D., and House, P. 1977. "The "False Consensus Effect": An Egocentric Bias in Social
Perception and Attribution Processes," Journal of Experimental Social Psychology (13:3), pp.
279-301.
Rouse, A.C., and Corbitt, B. 2007. "Understanding Information Systems Outsourcing Success and Risks
through the Lens of Cognitive Biases," 15th European Conference on Information Systems,
University of St. Gallen, St. Gallen, Switzerland, pp. 1167-1178.
Russo, J.E., and Schoemaker, P.J.H. 1992. "Managing Overconfidence," Sloan Management Review
(33:2), pp. 7-17.
Sabherwal, R., and King, W.R. 1995. "An Empirical Taxonomy of the Decision-Making Processes
Concerning Strategic Applications of Information Systems," Journal of Management
Information Systems (11:4), pp. 177-214.
Shepherd, D., Zacharakis, A., and Baron, R. 2003. "VCs' Decision Processes: Evidence Suggesting More
Experience May Not Always Be Better," Journal of Business Venturing (18:3), pp. 381-401.
Simon, H.A. 1959. "Theories of Decision-Making in Economics and Behavioral Science," American
Economic Review (49:3), pp. 253-283.
Simon, M., Houghton, S.M., and Aquino, L. 1999. "Cognitive Biases, Risk Perception, and Venture
Formation: How Individuals Decide to Start Companies," Journal of Business Venturing (15), pp.
113-134.
Skala, D. 2008. "Overconfidence in Psychology and Finance - An Interdisciplinary Literature Review,"
Bank i Kredyt (4), pp. 33-50.
Sporer, S., Penrod, S.D., Read, D., and Cutler, B.L. 1995. "Gaining Confidence in Confidence: A New
Meta-Analysis on the Confidence-Accuracy Relationship in Eyewitness Identification Studies,"
Psychological Bulletin (118), pp. 315-327.
Taylor, S.E., and Brown, J.D. 1988. "Illusion and Well-Being: A Social Psychological Perspective on
Mental Health," Psychological Bulletin (103:2), pp. 193-210.
Thompson, S.C. 1999. "Illusions of Control: How We Overestimate Our Personal Influence," Current
Directions in Psychological Science (8:6), pp. 187-190.
Tracey, J., Arroll, B., Richmond, D., and Barham, P. 1997. "The Validity of General Practitioners' Self
Assessment of Knowledge: Cross Sectional Study," British Medical Journal (315:7120), pp. 1426-
1428.
van de Venter, G., and Michayluk, D. 2008. "An Insight into Overconfidence in the Forecasting Abilities of
Financial Advisors," Australian Journal of Management (32:3), p. 545.
Yaniv, I., and Foster, D.P. 1997. "Precision and Accuracy of Judgmental Estimation," Journal of
Behavioral Decision Making (10), pp. 21-32.
Yeo, K.T. 2002. "Critical Failure Factors in Information System Projects," International Journal of
Project Management (20:3), pp. 241-246.
Zacharakis, A., and Shepherd, D.A. 1999. "Knowledge, Overconfidence and the Quality of Venture
Capitalists' Decisions," 13th Annual National Conference of the United States Association for
Small Business and Entrepreneurship, M.D. Meeks and S. Kunkel (eds.), San Diego, CA: United
States Association for Small Business and Entrepreneurship.
Appendix
Table 1. Questions related to Experienced Knowledge

Question | Response categories
What is your current level of hierarchy? | 1 = Lower; 2 = Middle; 3 = Upper Management
What is your total professional experience? | In years
What is your experience in IT departments? | In years
What is your ITO experience? | Number of ITO decisions involved in
Source: Based on several studies (e.g., Forbes 2005; McKenzie et al. 2008; Menkhoff et al. 2006; van de Venter and Michayluk 2008)
Table 2. Questions used in Calibration Study*

Knowledge questions**
- What percentage of German firms outsource their data centers?
- What percentage of IT decision-makers see “best practices” as very important for ITO?
- What percentage of IT decision-makers would consult strategic advisors for ITO decisions?
- What percentage of a firm’s total IT budget is expended on IT outsourcing?
- What percentage of German companies expect cost reductions from IT outsourcing?
Source: All knowledge questions were adapted from two current IT outsourcing surveys (Orange Business Services 2009; Messerschmidt et al. 2008)

Self-assessment of performance
- Compared to your peers (i.e., other IT decision-makers), how well do you think you scored? Please indicate below the percentage of your peers that you think you have outperformed in terms of providing small interval errors [The authors: interval error was explained to the participants during the calibration study]. For example, if you think you scored better than 90% of your peers, then move the slider to 90. Or if you think you scored better than 15% of your peers, move the slider to 15.
Source: Based on Kruger and Dunning (1999)

* Note that we derived measures (i.e., hit rate, interval size and error) for Miscalibration, Objective Knowledge and the Better-than-Average effect from the calibration study.
** For all knowledge questions, respondents had to provide a low and a high estimate that they were 90% certain to capture the correct answer.
Table 3. Questions related to Subjective Knowledge

Questions: Regarding ITO decisions, how would you rate your (1) skills, (2) performance, and (3) success in comparison to other IT decision-makers?
Scale: 7-point Likert scale ranging from 1 (much worse than the average) to 7 (much better than the average)
Source: Adapted from Larrick et al. (2007); Menkhoff et al. (2006)
Interitem reliability: α = .775
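The interitem reliability reported for these three items (α = .775) is Cronbach's alpha. As a sketch, the standard formula can be computed as follows; the Likert responses below are invented for illustration and do not reproduce the study's data, so the resulting alpha differs from the reported value.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score columns.

    items: list of k lists, each holding one item's scores across the
    same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's summed score across the k items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 7-point Likert responses for the three items above
skills      = [5, 6, 4, 7, 5]
performance = [5, 7, 4, 6, 5]
success     = [6, 6, 3, 7, 4]
alpha = cronbach_alpha([skills, performance, success])
```

Values above roughly .7 are conventionally taken to indicate acceptable internal consistency, which is why the scale's reported α = .775 supports averaging the three items into one subjective-knowledge score.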
Table 4. Questions related to Illusion of Control (Control Position)

Questions:
- Most of the news on ITO is not surprising to me.
- When a service provider does not meet the requirements as arranged, it is not surprising to me.
- When one of my ITO decisions meets all the requirements, it is due to my good planning.
- Most of the time I can predict very early if an ITO decision is going to be a success or not.
Scale: 7-point Likert scale ranging from 1 (totally disagree) to 7 (totally agree)
Source: Adapted from Burger and Cooper (1979)
Overconfidence may hinder effective venture capitalist (VC) investment decisions. Overconfident VCs may overestimate the likelihood that a venture will succeed (or fail). Two types of knowledge are important. Primary knowledge is information relating to the current venture. If VCs rely heavily on experience about similar ventures, they may be overconfident. Likewise, meta-knowledge - knowing what you do and don't know - affects overconfidence. VCs with low meta-knowledge likely don't seek enough information to make an informed decision. Additionally, individual characteristics, interaction among syndicate partners, and the task itself likely influence overconfidence. The paper looks at these factors and suggests ways to control overconfidence.