BIASES AND HEURISTICS IN R&D PROGRAM
MANAGEMENT DECISIONS
Patrick T. Hester, Ph.D., Old Dominion University
Behnido Y. Calida, Old Dominion University
______________________________________________________________________________
Abstract
Whenever a program manager is faced with a programmatic decision, his or her beliefs and experiences bias how the decision is viewed and what information is utilized to arrive at a final alternative. These biases can be either intentional or unintentional. Intentional biases result from the program manager's willful decision to bias the results of an assessment, perhaps due to a preference for one alternative over another. Even without intentional biases to account for, all human beings have unintentional cognitive biases that affect the information elicited from them. These cognitive biases include the availability heuristic, the conjunction fallacy, the representativeness heuristic, and anchoring. Such biases can be both harmful and helpful to the decision process. This paper explores these relationships and their role as both a help and a hindrance to program management decisions within an R&D environment.
Key Words
Research and development, biases, heuristics, R&D
program management
Introduction
Research and development (R&D) is a key driver of
societal technological innovation. The innovation process has been conceptualized as either 1) a linear process (OECD, 1993; Ramsey, 1986, p. 7) or 2) a nonlinear one; in the linear view, basic research is followed by applied research, then development, and eventually commercialization. Basic research refers to "quests for basic understanding," thereby producing a "continuing stream" of new knowledge and fundamental ideas. Among these new ideas, the few that show technical feasibility are screened into applied research, then narrowed further into more detailed development. The remaining ideas that demonstrate both technical and market feasibility are eventually adopted by innovating firms and transitioned into commercial markets. It is clear that even the linear model of
innovation is a complex process requiring significant
effort, both from a programmatic and a research
standpoint.
For the purposes of this paper, those who provide
sources of funding for research (private organizations,
private foundations, public-sector agencies) are
referred to as sponsors, while those that seek to obtain
funding for the purposes of undertaking sponsored
research are referred to as researchers (or research
organization at the enterprise level). This paper
focuses on the analysis of sponsors within the context
of profit-driven R&D enterprises. Several issues arise
in the process of establishing a successful R&D
organization; among these are large capital requirements
and inherent technological risk (Braunstein et al.,
1980). At the heart of sponsor funding decisions are
individual program managers who must make
programmatic decisions (pursuing a new program or project, abandoning a current program or project, subcontracting work, increasing or decreasing resources, and selecting between alternative projects/technologies). Since these decisions originate
with individuals, who are prone to errors in judgment
and decision making, this paper focuses on the
identification, understanding and resolution of these
mistakes.
Whenever a program manager must make a
programmatic decision within a research and
development (R&D) organizational context, their
beliefs and experiences bias how they view the
scenario and what information they choose to utilize in
their decision. These biases can take the form of either
intentional or unintentional biases. Intentional biases
are a result of the program manager’s willful decision
to bias the results of their assessment. This willful deceit can occur due to a preference for one alternative over another, often stemming from personal relationships. For example, the program manager may once have worked for a research organization and thus retain a significant vested interest in seeing that organization continue to succeed. In that case, he or she may intentionally bias decisions by directing a disproportionate share of funding to that organization. Alternatively, the program manager may have a reason to disfavor a particular research organization and may intentionally bias the results accordingly (e.g., the program manager has a personal problem with an employee of a particular organization, or he/she was fired from that organization).
Typically, these intentional biases are easier for an outside observer to recognize, as strong connections between the program manager and his/her intentionally biased choice (such as significant financial or personal ties) should emerge. It is important to note that the vast majority of program managers will not exhibit this behavior, but the analyst should be cognizant of it nonetheless. Given their scarcity and the ease with which they are identified (not to mention the ease with which a program manager can choose to avoid them), these intentional biases are mentioned merely for completeness.
It may be mistakenly assumed that, because an individual is a program manager in a particular subject area or has significant program management experience, he or she is perfectly capable of making rational decisions without biases. Even
without intentional biases to account for, all human
beings have unintentional cognitive biases that affect
their decision making. These cognitive biases include
behaviors such as the availability heuristic, conjunction
fallacy, representativeness heuristic, and anchoring and
adjustment. It is these unintentional biases that operate
at the subconscious level, and are thus more difficult to
prevent, that are the focus of this paper. The most
common biases are discussed at length, including their
potential advantages and disadvantages with respect to
program management decisions. The paper concludes
with some recommendations for program managers.
Biases and Heuristics
Following is a discussion of several unintentional
biases and heuristics that can affect program
management decisions. They are discussed in terms of
their potential benefits and drawbacks. While biases
and heuristics may be seen in a negative light, this
perspective is not universally true. It will be shown in
this paper that several of these techniques can help a
program manager make sound, effective decisions with
minimal cognitive effort. The key is for the program manager to recognize when these techniques are being used and to reject them when they may be harmful to the program management process.
Availability Heuristic
The availability heuristic refers to the practice of basing probability estimates on readily available information from one's own set of experiences (Tversky and Kahneman, 1973; Tversky and Kahneman, 1974). That is to say, humans estimate the likelihood of an event based on a similar event that they can remember (which is, by definition, drawn from a biased and unrepresentative sample). Further, since newer events are more salient in one's mind, they influence an individual's reasoning in larger proportion than older events. Additionally, events with unusual
characteristics stand out in one’s mind (you don’t
remember the hundreds of times you went to a given
restaurant, but you definitely remember the time you
got food poisoning).
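To make this sampling effect concrete, the following minimal sketch (all values are illustrative assumptions, not data from any study) simulates how a recency-weighted memory of project outcomes can overstate a failure rate when the most recent project happened to fail.

```python
import random

random.seed(7)

# Hypothetical outcome history: 1 = project failed, 0 = succeeded.
# The true underlying failure rate is 10%.
history = [1 if random.random() < 0.10 else 0 for _ in range(100)]
history[-1] = 1  # a salient, very recent failure

# Unbiased estimate: a simple average over all remembered events.
unbiased = sum(history) / len(history)

# Availability-style estimate: recent events weigh more heavily
# (exponential decay toward the past).
decay = 0.9
weights = [decay ** (len(history) - 1 - i) for i in range(len(history))]
biased = sum(w * x for w, x in zip(weights, history)) / sum(weights)

print(f"Unbiased failure estimate: {unbiased:.2f}")
print(f"Recency-weighted (biased) estimate: {biased:.2f}")
```

In a typical run, the recency-weighted estimate is pulled well above the true rate because the single fresh failure dominates the weights; that disproportionate influence of recent, salient events is the essence of the heuristic.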
Since experienced program managers presumably have a larger sample of experienced events than their inexperienced colleagues, it is likely that their propensity for the availability heuristic will decrease as their experience level increases (thereby rendering their samples more representative of the entire population). They may, for instance, be better able to judge a particular research organization based on years of experience with that company. However, a more naïve program manager may provide a better result if he/she has experienced a relevant event recently, whereas a program manager with many years of relevant, but not recent, experience may not provide information that is as useful. For example, a research organization may have had troubles in the past (and thus be under scrutiny from a more experienced program manager), yet it may have replaced its CEO and significantly improved its performance in recent years. The improvement catches the attention of a newer program manager, encouraging him/her to fund a project at that particular organization (given his/her unawareness of the performance issues under the prior CEO).
Further, individuals may be biased based on the
retrieval mechanism that is utilized to obtain the
memory. Depending on who is asking the question, for
example, an individual may consciously or
unconsciously block memories. The availability
heuristic can be a hindrance to effective program
management decisions. In order to combat this
problem, program managers, both experienced and
inexperienced, should be sure to understand how their
experiences bias the data they retrieve about a
particular scenario.
Representativeness Heuristic
The representativeness heuristic (Tversky and Kahneman, 1974) refers to the phenomenon whereby individuals assume commonalities between objects and estimate probabilities accordingly. For example, a program manager may estimate the probability that a proposed line of research will succeed (and thus decide whether or not to fund the research based on its value when compared with a predetermined threshold) by assuming that the proposed line is similar to a previous line of research and therefore has the same probability of success. This determination of similarity between objects is typically performed by comparing their attributes. Individuals compute a running tally of matches versus mismatches and then estimate whether or not the item fits a category based on the total. Once the item is categorized, automatic category-based judgments are
made about the member item. Using this type of
analysis has its issues. There may, in fact, be a glaring
difference between the two lines of research that the
program manager is overlooking. Similarity (in terms
of research organization, budget, duration, or research
focus) does not imply a similar probability of success.
The new line of research may have subtleties inherent
in it that make it significantly riskier. Or, it may be
able to leverage the results of the earlier research, thus
decreasing the inherent risk.
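The match-versus-mismatch tally described above can be sketched in a few lines of code. This is only an illustration of the mechanism, not a recommended practice; the attribute names, values, and threshold are hypothetical.

```python
# Hypothetical attribute profiles for two lines of research.
previous_line = {"organization": "Lab A", "budget": "<$1M",
                 "duration": "2 years", "focus": "materials"}
proposed_line = {"organization": "Lab A", "budget": "<$1M",
                 "duration": "4 years", "focus": "materials"}

def tally(candidate: dict, reference: dict) -> int:
    """Running tally: matches minus mismatches across shared attributes."""
    matches = sum(1 for k in reference if candidate.get(k) == reference[k])
    return matches - (len(reference) - matches)

# If the tally clears a threshold, the heuristic categorizes the new
# line as "similar" and copies the old probability of success over to
# it -- exactly the shortcut that can overlook a glaring,
# risk-relevant difference (here, the doubled duration).
if tally(proposed_line, previous_line) >= 2:
    print("Categorized as similar; prior success probability reused.")
```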
Additionally, program managers must be careful with category associations, as these can be irrational, stereotypical, or morally troublesome (e.g., when comparing researchers at the individual level). Such associations may subconsciously influence a program manager's actions toward, and attitudes about, the underlying group members.
To combat this bias, individuals must use base
rates to compare the underlying category probability
versus the specific scenario (e.g., what is the
probability that any new research program will
succeed, given similar circumstances). Then, the base
rate can be adjusted to accurately reflect the specific
scenario’s characteristics.
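One standard way to perform this base-rate adjustment is a Bayes-style odds update. The sketch below is illustrative; the numbers and the likelihood ratio are assumptions, not values from the paper.

```python
def adjusted_probability(base_rate: float, likelihood_ratio: float) -> float:
    """Adjust a category base rate for scenario-specific evidence
    by scaling the prior odds with a likelihood ratio."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Assumed numbers: 25% of comparable new research programs succeed;
# the specific program's evidence (e.g., it can leverage earlier
# results) is judged twice as likely under success as under failure.
print(f"{adjusted_probability(0.25, 2.0):.2f}")  # -> 0.40
```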
It should be noted that availability and
representativeness are often confused, but they are not
the same phenomenon. With availability, individual
instances are retrieved and a judgment concerning the
frequency of the item is made based on the item’s
saliency and ease of information retrieval.
Alternatively, representativeness involves retrieving information about generic concepts and then making a similarity match between the item in question and a proposed category. The category association, along with the goodness-of-match or degree of similarity, produces confidence or a frequency estimate.
Conjunction Fallacy
Another bias that program managers may be prone to is
the conjunction fallacy. Tversky and Kahneman
(1983) introduce this phenomenon with the following
example: Linda is 31, single, outspoken and very
bright. She majored in philosophy. As a student, she
was deeply concerned with issues of discrimination and
social justice and also participated in antinuclear
demonstrations. Is she more likely to be (a) a bank
teller, or (b) a bank teller and active in the feminist
movement?
The overwhelming majority of survey respondents answered b, despite the fact that b is more
restrictive (and therefore less probable) than a. People
report the more complicated scenario as being “more
real” or that it “made more sense.” A corollary for
program managers could be the following: Company A
is a large, private industry R&D enterprise. It has been
profitable as one of the top 100 R&D enterprises in
terms of total expenditures for 50 years, and has
NASA, Kellogg and Ford among its previous clients.
Is Company A more likely to be (a) successful on its
next research project or (b) successful on its next
research project and profitable for the upcoming year?
Program managers may be inclined to choose b, given
the 50 year history of success for the organization.
Fundamentally, however, this cannot be the case, as the
axioms of probability prevent the combination of two
events from being more probable than either of the two
individual events.
The conjunction fallacy is counteracted by
analyzing individual event probabilities and then
combining them. Individuals often make this mistake
and it is possible program managers can be prone to
this type of fallacy as well. Program managers should
not be fooled into thinking that success or failure of an
organization or a particular project is more or less
likely than is allowed per the laws of probability. This
will force program managers to be realistic about their
assessments and projections.
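The underlying probability rule is easy to verify numerically. In the sketch below the probabilities are assumed for illustration; the point is only that the conjunction can never exceed either component event.

```python
# Assumed component probabilities for Company A's next year.
p_success = 0.80                    # P(next project succeeds)
p_profit_given_success = 0.90       # P(profitable year | project succeeds)

# By the multiplication rule, the conjunction is:
p_both = p_success * p_profit_given_success  # 0.72

# A conjunction is never more probable than either event alone.
assert p_both <= p_success
print(f"P(success) = {p_success:.2f}; "
      f"P(success and profitable) = {p_both:.2f}")
```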
Anchoring and Adjustment
Another bias is the anchoring and adjustment heuristic, observed by Tversky and Kahneman (1974). Humans establish anchors as starting points for their judgments and base subsequent observations on the initial value that was provided to them. In other words, if a program manager is provided a baseline value, his/her subsequent estimates will be anchored by that baseline.
Further, values provided early in the estimation process
have a larger weight than those provided late in the
process. Additionally, anchors tend to bias information
that is sought and included in one’s analysis. The
status quo is a powerful anchor as well. It is often
easier for individuals to take an existing value and
adjust it to their specifications. For example, a
program manager estimates how long it will take a
project to be completed based on previous projects.
The anchoring and adjustment effect can be either
beneficial or detrimental. For example, an organization
that is seeking funding may ask for $1 million in seed
funding for a project. This initial anchor will bias the
program manager's funding decision. If he/she decides to fund the project (and subsequently determines a funding dollar amount), then the funding award will be closely tied to the anchor provided by the research organization. Thus, the award will likely be near $1
million. If the perceived value of the project is well
below $1 million, the program manager has been
anchored by the research organization’s budget and the
sponsor has overpaid. If the perceived value of the
project is above $1 million, the program manager,
similarly anchored to the proposed budget, will
underpay based on the aggressive budgeting of the
research organization, albeit unintentionally.
Program managers can combat this effect by
independently generating funding allocations (or other
required estimates) before examining the budget of a
proposed project. Then, they can hedge their bets by evaluating the budget proposals from the research organizations and making decisions strategically.
Returning to the project example, if the same project
with a budget of $1 million is proposed, but the
funding organization was prepared to provide $2
million in funding (based on an independent analysis of
the project's worth), they will be relieved to find that the researching organization is asking $1 million less than the allowable budget. Thus, they will
award $1 million and be pleased that they are “getting
a bargain.” However, if the funding organization is
prepared to provide $2 million for the research and the
researching organization is asking for $3 million, the
sponsor will offer $2 million and take their chances
that the researching organization will accept their offer.
They will ascertain that the researching organization is
seeking too high a level of funding. By independently
generating their budget estimates, free of the research
organization’s anchors, the program managers resist
overpaying for R&D.
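The strategy described above amounts to a simple decision rule: value the project independently before reading the proposal, then award the lesser of the two figures. A minimal sketch, using the dollar amounts from the examples:

```python
def award_millions(independent_value: float, proposed_budget: float) -> float:
    """Fund the lesser of the sponsor's own valuation and the ask,
    so the proposer's budget cannot anchor the award upward."""
    return min(independent_value, proposed_budget)

# First example: sponsor values the work at $2M, proposer asks $1M.
print(award_millions(2.0, 1.0))  # -> 1.0 ("getting a bargain")

# Second example: sponsor values the work at $2M, proposer asks $3M.
print(award_millions(2.0, 3.0))  # -> 2.0 (counteroffer at own valuation)
```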
Recognition Heuristic
The recognition heuristic describes the tendency of an individual to select the alternative that is most familiar to them. While it seems to be a
fundamentally unsound approach to decision making,
Goldstein & Gigerenzer (1999) discovered
experimentally that this approach often outperforms
more rigorous approaches to decision making. It can
be useful for “on the fly” decision making in
inconsequential scenarios such as deciding on a
restaurant while on a road trip based on restaurants you
recognize (e.g. McDonald’s or Subway) or buying a
pair of shoes based on brands that you’ve worn in the
past and know to be reliable (e.g. Nike). With a
scenario such as sponsor funding decisions, this
heuristic would seem to have no place. After all,
funding decisions should be made based on a more
rigorous approach than the recognition of an
organization by a participating program manager.
However, this approach can be useful, if only to recognize which researchers have earned a negative reputation in previous interactions. For example, a sponsor may remember a research organization negatively based on previous poor performance by that organization. In this case, the heuristic serves as a quick decision aid to weed out inferior research organizations. It becomes dangerous, however, when a sponsor uses it as a basis for selecting organizations merely because their previous performance is recognized. This can lead to crowding-out effects, as
sponsors begin to award funding to the same
researchers on a continuous basis and new researchers
have trouble obtaining funding. The negative and
positive implementations of this bias must be carefully
weighed by program managers.
Duration Neglect
Duration neglect is another bias that may affect sponsors. Typically, individuals view historic experiences only with reference to the peak and the end state. If research organization A took 2 years to complete a project and charged $1.5M, and research organization B took 4 years to complete a similar project and charged $1M, the more positive memory will be of the latter scenario: individuals typically remember the cost savings and not the time difference. This affects program managers as well, since different factors will influence their memories in different ways. Thus,
it is important that sponsors take all factors into
account when making program management related
decisions. This will prevent them from unconsciously
biasing one factor over another. The corollary to this is
a conscious desire of the program manager to choose
the lowest cost research organization or the research
proposal with the shortest time horizon.
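One simple way to take all factors into account is an explicit weighted score over every attribute, so that neither cost nor duration is silently neglected. The sketch below reuses the two organizations from the example; the equal weights and the max-normalization are illustrative assumptions, not a prescribed method.

```python
# Attributes of the two hypothetical organizations from the text.
candidates = {
    "A": {"cost_musd": 1.5, "duration_years": 2.0},
    "B": {"cost_musd": 1.0, "duration_years": 4.0},
}

# Equal negative weights: lower cost and shorter duration are better.
weights = {"cost_musd": -0.5, "duration_years": -0.5}

def score(attrs: dict) -> float:
    """Weighted score after normalizing each attribute by its maximum
    across candidates, so dollars and years are on comparable scales."""
    return sum(
        weights[k] * attrs[k] / max(c[k] for c in candidates.values())
        for k in weights
    )

for name in sorted(candidates, key=lambda n: score(candidates[n]), reverse=True):
    print(f"Organization {name}: score = {score(candidates[name]):.2f}")
```

Under these assumed equal weights, organization A outscores B: the doubled duration outweighs the cost savings, which is the opposite of what a peak-and-end memory of the two projects would suggest.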
Diversification Bias
Diversification bias is another bias that can influence
program managers in the R&D process. Individuals
like to think that they value research portfolio diversity, but over time, when faced with the same choice multiple times, they often make the same choice, thus tending to regress towards the mean. If a program manager were asked to project a research portfolio for his/her organization for the next three years, he/she would likely diversify the projected portfolio. In reality,
when the time comes to make funding decisions, the
program manager’s decisions will more closely mirror
those of historical portfolios rather than reflecting the
projected diversity of earlier predictions. This is due in
part to the aforementioned biases and heuristics, which
may lead a program manager to maintain funding to
particular research organizations on a routine basis
rather than trying to fund a new organization. Once again, as long as this action is performed consciously, there is no issue; a problem may occur when the program manager acts without awareness of what he/she is doing.
Akin to the diversification bias is the hot hand
fallacy. In the hot hand fallacy, individuals attribute a
pattern to a random streak of positive (or negative)
performance. Thus, a research organization may have
successfully over-delivered on several recent projects
in a row, thereby securing the goodwill of a sponsor in
obtaining follow-on funding. This goodwill may be ill-fated, however, as the research organization is likely to regress at some point, not as a byproduct of incompetence or error, but as a result of the inherent randomness in human performance. For this reason,
sponsors would do well to diversify their R&D
investments in order to ensure a varied research
portfolio (and to prevent the over-achievers from all
regressing to the mean at the same time, thereby
crippling the organization’s research portfolio).
Conclusions
R&D program managers continuously face difficult
programmatic decisions. In order to deal with the
corresponding overabundance of information, they
must rely on biases and heuristics to make decisions in
an efficient manner. These biases and heuristics can be
exhibited in both an intentional and an unintentional
manner. While intentional biases are readily identified, unintentional biases and heuristics require cognizance on the part of the program manager. This paper explored several biases and heuristics and discussed their relevance to an R&D program management setting. The key is for the program manager to recognize when these biases are in use and to reject them when they may be harmful to the program management process. The authors believe the problem explored in this paper necessitates the development of a methodological approach, as future work, in support of programmatic decision making.
References
Braunstein, Y., Baumol, W. J., and Mansfield, E., "The economics of R&D," TIMS Studies in the Management Sciences, 15 (1980), pp. 19-32.
Goldstein, D. G., and Gigerenzer, G., "The recognition heuristic: How ignorance makes us smart." In G. Gigerenzer and P. M. Todd (Eds.), Simple heuristics that make us smart, Oxford: Oxford University Press (1999).
OECD, Organisation for Economic Co-operation and Development, Proposed Standard Practice for Surveys of Research and Experimental Development: Frascati Manual, OECD, Paris, 5th ed. (1993).
Ramsey, J. E., Research and Development: Project Selection Criteria. Michigan, USA: UMI Research Press (1986).
Tversky, A., and Kahneman, D., "Availability: A heuristic for judging frequency and probability," Cognitive Psychology, 5 (1973), pp. 207-232.
Tversky, A., and Kahneman, D., "Judgment under uncertainty: Heuristics and biases," Science, 185 (1974), pp. 1124-1130.
Tversky, A., and Kahneman, D., "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment," Psychological Review, 90 (1983), pp. 293-315.
About the Authors
Patrick T. Hester is an Assistant Professor of
Engineering Management and Systems Engineering at
Old Dominion University. He received a Ph.D. in Risk and Reliability Engineering (2007) from Vanderbilt University and a B.S. in Naval Architecture and Marine Engineering (2001) from Webb Institute of Naval
Architecture. Prior to joining the faculty at Old
Dominion University, he was a Graduate Student
Researcher in the Security Systems Analysis
Department at Sandia National Laboratories. His
research interests include multi-objective decision
making under uncertainty, complex system
governance, probabilistic and non-probabilistic
uncertainty analysis, and decision making using
modeling and simulation.
Behnido Calida is a Ph.D. student in the Engineering Management and Systems Engineering department at Old Dominion University. He is also currently a Graduate Research Assistant in Old Dominion University's National Centers for System of Systems Engineering (NCSoSE). He holds a B.S. in Applied Physics from the University of the Philippines (2000) and a Master's in Engineering Management from Old Dominion University (2008). His research interests include the study of concepts, methodologies, and applications pertaining to traditional engineering systems, systems of systems, and complex systems engineering, specifically within R&D governance fields.