Transactions of the SDPS:
Journal of Integrated Design and Process Science
17 (4), 2013, 19-35
DOI 10.3233/jid-2013-0019
http://www.sdpsnet.org
Considering Confirmation Bias in Design and Design
Research
Gregory M. Hallihan and L.H. Shu*
Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
Abstract: Confirmation bias is an innate and pervasive human tendency to preferentially attempt to validate beliefs instead of invalidating them. However, the design community, which is increasingly concerned with the cognition of
designers, has largely overlooked this phenomenon. This paper discusses the relevance of confirmation bias with
respect to its potential to influence designers and design researchers. The existing literature suggests that confirmation
bias is present among designers and can contribute to undesirable design outcomes. Our emphasis is placed on the
role of confirmation bias in fixation and the misapplication of biological analogies in design. We discuss the results
of our experimental study that suggest confirmation bias may skew data evaluation in design research, contributing to
deviations from scientifically accurate conclusions. We also discuss possible methods to mitigate confirmation bias.
Keywords: Cognitive bias, conceptual design, design creativity, design evaluation, design methodology
1. Introduction
An inherent bias in human information processing has been observed since as early as Sir Francis Bacon's Novum Organum (Spedding et al., 1863). Referred to by psychologists as confirmation bias, it is a tendency to give more weight, or attend more acutely, to information that validates hypotheses or beliefs than to similarly diagnostic and relevant information that invalidates them. While confirmation bias
does not categorically result in poor decision-making outcomes, failing to consider decision-relevant
information is certainly a deviation from normatively optimal decision-making. It is when these deviations
systematically result in erroneous conclusions or perpetuate fallacious beliefs that understanding and
mitigating confirmation bias is a priority.
The aim of this work is to investigate whether designers and design researchers are subject to the
influence of confirmation bias. It is worthwhile for the design community to consider how confirmation
bias may account for observed design biases and poor decisions among designers, and to ensure that
confirmation bias does not detrimentally influence design researchers themselves. This paper begins with
a review of the literature describing the effect of confirmation bias across domains, and its potential to
contribute to undesirable decision outcomes. The discussion then shifts to exploring how confirmation bias
could influence the design process, including its relation to known design biases. We hypothesize based on
this previous literature that confirmation bias may skew data evaluation in design research, contributing to
deviations from scientifically accurate conclusions. We present an experiment that aims to study whether
confirmation bias has the potential to influence design research. The results suggest that confirmation bias
is present in the intuitive evaluation of design hypotheses even among individuals with research experience.
* Corresponding author. Email: shu@mie.utoronto.ca Tel: (+1)416-946-3028.
Finally, a summary is presented which discusses research conclusions and approaches to mitigate
confirmation bias.
2. Confirmation Bias
The earliest empirical research on confirmation bias in modern academic journals seems to come from
Peter Wason (1960; 1968), who reported a series of experiments on the subject in the 1960s. Wason demonstrated that individuals engaged in simple logic and rule-deduction tasks have an innate desire to confirm beliefs rather than disconfirm them. However, Johnson-Laird et al. (1970) replicated one of Wason's (1968) experiments and found that individuals were less prone to confirmatory bias when the
stimuli evaluated represented a realistic relationship as opposed to an arbitrary one. While the results of
Johnson-Laird et al. (1970) imply that confirmation bias is less prevalent in real-world decision-making,
Nickerson (1998) presents a review of the literature on confirmation bias, providing evidence of its
influence on cognition in several practical contexts.
2.1. The prevalence of confirmation bias
Confirmation bias has been observed to influence human judgement in seemingly every domain in which
it has been studied, including formal logic, medicine, politics, systems design, management, science,
information analysis, law, and personal judgement (Burke, 2007; Cheikes et al., 2004; Dunbar, 1997;
Isenberg, 1988; Lehner et al., 2008; Nickerson, 1998; Silverman, 1992). One possible reason for this
pervasiveness is that individuals find it easier to think of information that supports their beliefs than
information that refutes them (Koriat et al., 1980). This explanation relates to a reliance on cognitive heuristics, specifically the availability heuristic: an overreliance on the most readily available information in memory when making judgements (Tversky & Kahneman, 1982). Fiske and Taylor (1984) propose that humans have an innate desire to conserve effort during cognitive processing, which leads to a reliance on cognitive heuristics to simplify information processing. Similarly, Klayman and Ha (1987)
argue that confirmation bias provides a framework which makes searching for decision-relevant
information more efficient than a random search.
Other researchers theorize that confirmation bias may reflect a pragmatic decision-making approach that
serves to minimize errors in real-world tasks (Friedrich, 1993). In this sense, confirmatory test strategies
may have arisen due to their benefit to fitness from an evolutionary perspective, a theory that Cosmides
(1989) suggests explains individuals’ enhanced performance on these tasks when they can be perceived as
social contracts. Therefore, confirmation bias may be inherent in the way human beings process information,
without being limited to any specific information-processing task or domain. While some researchers argue
that confirmation bias is adaptive in real-world decision-making tasks, there has been significant research
demonstrating that it can also lead to objectively poor decisions and outcomes in a host of real-world and
experimental tasks.
2.2. The influence of confirmation bias
Nickerson (1998) reports that confirmation bias can lead individuals to: fail to use disconfirming
evidence to adjust their beliefs, accept confirming evidence too easily, misinterpret disconfirming evidence,
and improperly assess the diagnostic weight of evidence. Logically, when these tendencies result in
erroneous conclusions, confirmation bias has a negative influence. However, as previously mentioned,
these tendencies may be part of pragmatic and adaptable decision-making strategies (Friedrich, 1993).
Therefore, the influence of confirmation bias is best characterized in relation to the goals of the decision-
maker (e.g., efficiency, accuracy, practicality, error minimization). One way to define this influence may
be through the use of decision models that quantify the costs and benefits associated with decision outcomes
(e.g., expected utility theory (Bernoulli, 1738), signal detection theory (Green & Swets, 1966)). However,
it is often difficult or impractical to accurately define these costs and benefits. In addition, proponents of
descriptive decision-making models argue that individuals often deviate from normative or rational models
while still arriving at “good” decisions. For example, Katsikopoulos (2012) discusses the value of both
rational and heuristic decision models in engineering design, i.e., multi-attribute theory and the Pugh
process, respectively.
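To make the cost-benefit framing above concrete, the following minimal sketch compares two decision options by expected utility. All option names, probabilities, and payoffs are hypothetical, invented purely to illustrate how such a model quantifies outcomes; only the probability-weighted-utility formula itself comes from expected utility theory.

```python
# Minimal sketch: comparing two hypothetical decision options by expected
# utility. Probabilities and payoffs below are invented for illustration.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one decision option."""
    return sum(p * u for p, u in outcomes)

# Option A: commit to the current design belief without further testing.
option_a = [(0.7, 100.0),   # belief happens to be correct: large payoff
            (0.3, -200.0)]  # belief is wrong and goes undetected: large cost

# Option B: run a disconfirming test before committing.
option_b = [(0.7, 80.0),    # belief correct, minus the cost of testing
            (0.3, -20.0)]   # error caught early: small cost

print(f"EU(A) = {expected_utility(option_a):.1f}")  # 10.0
print(f"EU(B) = {expected_utility(option_b):.1f}")  # 50.0
# Under these (invented) numbers, the disconfirming test is the rational choice.
```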
Another approach is to consider how confirmation bias contributes to undesirable outcomes from a
worst-case scenario perspective. For example, confirmation bias may result in physicians inaccurately
diagnosing patients’ symptoms (Krems & Zierer, 1994; Pines, 2006), information analysts inaccurately
weighting information in accident analyses (Cheikes, et al., 2004), and prosecutors failing to consider
alternative suspects (Burke, 2007). These types of cases all suggest that it is worthwhile to consider and
mitigate the influence of confirmation bias, at least as a pre-emptive measure, when the results of
individuals’ decisions have serious implications.
2.3. Confirmation bias in design
Given the universal nature of confirmation bias, and that decision-making is a key component of the design process (Gero, 1990), it seems logical that design cognition is also subject to confirmation bias.
However, there has been little previous research specifically examining this phenomenon in design.
Silverman and Mezher (1992) argue that confirmation bias contributes to misconceptions held by designers and to human error in the evaluation of systems design. They suggest that under time constraints, confirmation bias could lead individuals to rely overly on evidence that confirms designs are complete and error-free. In addition, they argue that a design case from NASA (Silverman, 1985) reveals that confirmation bias has contributed to errors in spacecraft system design. In other research, Silverman (1992)
found that a design critic embedded within CAD software could effectively mitigate confirmation bias
while enhancing the speed and accuracy of participant performance. Silverman’s research suggests that
confirmation bias (as well as other cognitive heuristics and biases) can lead to errors in information
acquisition or processing during systems design, but also offers possible methods to mitigate it.
In related research, Busby (1999) examined factors that create obstacles to design reuse in mechanical design. He suggests that differences of opinion between designers and design clients present opportunities for confirmation bias that lead designers not to respond to feedback incongruent with their
own beliefs. This has the potential to result in failures to reapply successful design solutions when the
designer wishes to adopt an alternative approach. This behaviour could be interpreted to suggest that
confirmation bias contributes to failures to adopt feasible and appropriate design strategies.
Otto and Wood (1999) address the possibility of designer bias when sorting customer needs statements
into an affinity diagram, and develop a method to understand and represent design function with decreased
bias. Maxwell et al. (2002) identify cognitive distortion and learned biases as human contributions to
complexity in design. Yan and Zeng (2011) identify cognitive conflicts as those that involve “relations
among different designers in the context of collaborative design and concurrent engineering.” Nguyen and
Zeng (2012) consider designer cognitive states and resources in identifying an inverse U shaped relationship
between designer’s mental stress and design creativity. Using similar methods, Aurup and Akgunduz (2012)
aim to reduce the effect of participant bias in the elicitation of user preferences through bio-signals. Luh et
al. (2012) include consumers’ cognitive structures in developing an empathic design method for customer-
centred products.
Our past research suggests that engineering students exhibit a confirmatory bias when referencing their
own design ideas. Specifically, Hallihan et al. (2012) present two studies that examined the role of
confirmation bias. The results of the first study, a protocol analysis involving novice designers engaged in
a biomimetic design task, indicate that confirmation bias is present during concept generation and offer
additional insights into the influence of confirmation bias in design. The results of the second study, a
controlled experiment requiring participants to complete a concept evaluation task, suggest that decision
matrices are effective tools to reduce confirmation bias during concept evaluation.
In the observational study, 30 engineering students from a 4th-year design course were split into 9
groups of 3-4 students to work on a biomimetic design problem. Group discussions were recorded and
transcribed to generate verbal protocols, which were then qualitatively coded (Cheong et al., 2012; Hallihan
et al., 2012). The authors found that the students were more likely to make statements that confirmed their
beliefs than to make ones that disconfirmed them (i.e., of 107 statements from 9 groups, 83% were coded
as confirmatory, while only 17% were coded as disconfirmatory [SD = 12.1%]). The results indicate that
individuals engaged in a design task are more likely to consider why their concepts are appropriate than
why they are not. A possible result of this, supported by other design research, is that designers may fail to see the shortcomings of their own solutions (Silverman & Mezher, 1992) or fail to attend to the feedback of others (Busby, 1999).
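As a sketch of the proportion calculation behind figures like these, the snippet below averages per-group confirmatory proportions. The per-group counts are hypothetical placeholders (only their totals, 84 confirmatory and 23 disconfirmatory statements, match the study); averaging group proportions rather than pooling all statements is presumably what yields a reported mean (83%) that differs slightly from the pooled 84/107 ≈ 79%.

```python
# Sketch of a per-group proportion analysis. The (confirmatory, disconfirmatory)
# counts per group are hypothetical, so the printed values will not reproduce
# the 83% / 17% (SD = 12.1%) reported in the text; only the method is mirrored.
import statistics

groups = [(12, 0), (10, 1), (11, 3), (6, 4), (12, 2),
          (5, 5), (9, 3), (11, 2), (8, 3)]  # 9 groups, totals 84 C / 23 D

props = [c / (c + d) for c, d in groups]
print(f"mean of per-group confirmatory proportions: {statistics.mean(props):.1%}")
print(f"SD across groups: {statistics.stdev(props):.1%}")

pooled = sum(c for c, _ in groups) / sum(c + d for c, d in groups)
print(f"pooled confirmatory proportion: {pooled:.1%}")  # 78.5%
```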
In addition to research directly examining confirmation bias in design, there has been recent research
examining other cognitive heuristics and biases. For example, Foster (2012) suggests that reliance on the
availability heuristic during failure mode and effects analysis leads designers to over-report the probability
of the most recent failure mode. Shu et al. (Hallihan & Shu, 2011; Shu, 2012) hypothesized that a reliance
on the availability heuristic could contribute to the misapplication of biological analogies in biomimetic
design and contribute to design fixation. The relevance of this, as previously identified, is that confirmation bias may be a product of the availability heuristic. Individuals who rely on the availability heuristic in the evaluation of design beliefs may therefore also be prone to exhibiting confirmatory bias.
Viswanathan and Linsey (2011) argue that sunk-cost bias may contribute to fixation during physical
prototyping. Sunk-cost bias refers to a tendency to maintain a course of action due to previous investment
(e.g., money, action, time, etc.) despite the fact that the prior investment should no longer logically be
influencing the decision (Arkes & Blumer, 1985). This work is especially relevant as it supports the
argument that cognitive biases can contribute to known design biases (i.e., design fixation) in conceptual
design. Hallihan, et al. (2012) have also suggested that a host of cognitive heuristics and biases likely
influence designers and may contribute to biased design cognition. The following sections explore the
possible relationship between confirmation bias and known design biases in more detail.
2.4. Confirmation bias and design fixation
Design fixation has the potential to restrict diversity in the concept generation process even among experienced designers (Linsey et al., 2010), warranting further exploration of the underlying cognition. Cardoso and Badke-Schaub (2011) describe three causes/types of fixation in design: 1) the inappropriate reuse of previously seen features or principles, 2) the adherence to a constant frame of thought (referred to here as strategy adherence), and 3) the inability to retrieve prior knowledge from memory. Our discussion focuses on confirmation bias and strategy adherence.
Confirmation bias will lead individuals to integrate information in a way congruent with their existing
beliefs. This is supported by research indicating that confirmation bias can lead designers to over-rely on information suggesting that their designs are error-free (Silverman & Mezher, 1992), fail to accurately consider belief-inconsistent perspectives (Busby, 1999), and fail to realize biological analogies are being applied improperly during biomimetic design (Hallihan et al., 2012).
Previous observational reports suggest that confirming beliefs may also prevent designers from
attending to group feedback and contribute to overconfidence in design solutions (Hallihan et al., 2012).
Interestingly, Jermias (2006) reports that overconfidence and a preference for confirmatory information
contribute to a resistance to change commitment to strategies. Overconfidence may also contribute to, and
be a product of, confirmation bias (Nickerson, 1998). Therefore, confirmation bias and confidence may
create a positive feedback cycle, enhancing designers’ resistance towards belief-inconsistent information.
Confirmation bias likely contributes to design fixation by inhibiting designers’ ability to consider or
accurately weight information that would prompt strategy change. The logic is that a designer has no
motivation to abandon a strategy if they are not presented with, or attentive to, evidence that adopting a
new strategy would better serve their goals. Certain design strategies may not be consciously accessible to
the designer; for example, Jansson and Smith (1991) demonstrate that designers may not be aware of the source they are fixated on. However, other more explicit design considerations, such as how the physical
structure of a biological entity is analogous to a design feature, may be more representative of the types of
decisions influenced by confirmation bias. However, further research is required to determine whether
confirmatory tendencies are statistically more likely to contribute to fixation on design strategies.
2.5. Confirmation bias in biomimetic design
After a decade of research in biomimetic design, Shu (2012) reports that engineering students engaged
in the application of specific provided biological analogies tend to:
- Fixate on particular words and phrases in text descriptions of biological phenomena that do not represent the overall biological analogy, with words familiar to engineers being the most likely cause of fixation.
- Develop the same concept multiple times in response to different biological analogies intended to elicit different solutions.
- Match provided analogies with existing (and sometimes widely known) solutions, rather than develop new solutions from the given analogy.
Similarly, Shu (2012) observes that students given access to a keyword search tool, developed to help
identify relevant analogies from a natural-language corpus, have a tendency to:
- Select already familiar phenomena for application, as opposed to new or unfamiliar phenomena.
- Develop solutions that pre-date the biological analogies that are claimed to inspire them.
Hallihan and Shu (2011) propose that in both these types of biomimetic design tasks, the biased detection
and application of analogies is a result of pre-existing mental models. These effects are framed in terms of
a reliance on cognitive heuristics, such as the availability heuristic.
However, in a protocol analysis by Hallihan et al. (2012), similar behaviours were observed to be perpetuated by confirmatory strategies (i.e., searching for ways in which selected solutions were similar to the source analogue and failing to consider differences). For instance, of the 107 design statements identified (84 confirmatory and 23 disconfirmatory), 26 confirmatory cases were labelled as attempts to verify how design solutions were similar to the biological phenomenon, while only two cases were identified where designers considered how their solutions differed from the biological phenomenon. This tendency was not correlated with specific performance outcomes in biomimetic design, but it suggests that designers are more likely to try to find how their solutions are analogous than how they are not. This tendency could contribute to fixation, as discussed, and to the perseverance of solutions that misapply analogies or are overly influenced by existing engineering knowledge.
2.6. Section conclusion
At this point, the prevalence of confirmation bias in design, as well as its potential to contribute to counterproductive design behaviour (i.e., failures to consider design-relevant feedback, overlooking design errors, overconfidence, design fixation, the misapplication of biological analogies), seems supported by a limited amount of research. However, further research is required to demonstrate definitively whether confirmation bias contributes to negative design outcomes. Another concern worthy of additional research is the influence of confirmation bias on those who study design; the following section explores this possibility.
3. Confirmation Bias in Design Research
Much of the literature on confirmation bias discusses its contribution to deviations from scientific
hypothesis testing; however, debate surrounding the issue remains. For example, Wickens and Hollands
(2000) argue that confirmation bias during the evaluation of well-defined hypotheses may lead to cognitive
tunneling, in which individuals fail to encode or process hypothesis-inconsistent information. However,
Klayman and Ha (1987) argue that positive test strategies, often mislabeled as confirmation bias, are
actually adaptive for determining the truth or falsity of a hypothesis. Therefore, the purpose of this research
is to explore how confirmation bias influences hypothesis evaluation in design research.
In an experiment by Hallihan et al. (2012), a group of graduate students were asked to evaluate the
hypothesis that designers fixate on example solutions for design problems they are solving. The participants
were asked to evaluate the validity of the fixation-on-examples hypothesis using design concepts as a data
set. This experiment was originally intended to examine the effectiveness of decision matrices in mitigating
confirmation bias in a concept evaluation task. However, it is revisited to examine how confirmation bias
influences individuals engaged in the analysis of design research data and hypothesis evaluation. The results
suggest that confirmation bias may skew individuals' acquisition or interpretation of research data; however, many participants still formed reasonable conclusions regarding the hypothesis.
3.1. Participants
Sixteen graduate students (2 female, 14 male) from the University of Toronto participated in the study.
Participants were recruited from a University of Toronto graduate residence and the department of
Mechanical and Industrial Engineering. The sample consisted of students from various faculties, with Law
and Engineering constituting a majority (see Table 1). We believe that the concepts were simple enough
that participants did not require extensive engineering knowledge. All the participants were graduate
students with experience conducting academic research.
Table 1. Participant demographic information
Participant No.  Age  Faculty      Gender
1                25   Zoology      Female
2                25   Genetics     Male
3                25   Sociology    Male
4                25   Medicine     Male
5                23   Law          Male
6                27   Law          Male
7                28   Law          Male
8                25   Law          Male
9                27   Engineering  Male
10               31   Engineering  Male
11               27   Engineering  Male
12               25   Engineering  Male
13               25   Engineering  Male
14               25   English      Female
15               26   Law          Male
16               25   Engineering  Male
3.2. Procedure
Participants were provided with the following background information and instructions:
There has been a significant amount of research demonstrating that designers often become fixated
by examples of successful design solutions. The research indicates that when designers see an example
solution for a design problem they are working on, they often incorporate elements of that example into
their own design solutions. This effect has been observed even when designers are instructed not to
fixate on examples, and even among experienced designers.
An experiment was run to test the hypothesis that designers fixate on examples. The design problem
and example solution given to participants can be seen at the bottom of the page. Six participants
generated solutions for the problem; their concepts can be seen on the next page. Your job is to look at
the results of the experiment (the participant concepts) to evaluate the validity of the fixation hypothesis,
stated as: “The presence of an example solution causes designers to fixate and incorporate elements of
the example into their own solutions”.
The design problem involved generating an automated watering system for a houseplant; the problem
and example solution are from Perttula and Liikkanen (2006). The six design concepts shown to participants
were solutions for the automated watering system problem (see Fig. 1). Participants were provided with
brief functional descriptions for each concept and the example. The participants were told the six concepts
had been generated by individuals from a previous study who had been exposed to the example solution.
Two of the concepts (1 and 3) were developed by the lead author to ensure they incorporated multiple
elements of the example solution. The others (2, 4, 5, 6) were from a previous design-fixation experiment
(Hallihan & Shu, 2011) and were scored by independent raters as relatively low in fixation on the example
solution. This was done to establish the ratio of “evidence” available for evaluation as roughly 66%
disconfirmatory and 33% confirmatory. The participants’ task approximates evaluating a design research
hypothesis using realistic research data.
Fig. 1. Evaluation concepts 1-6 (Hallihan et al., 2011) and example concept (Perttula and Liikkanen, 2006).
Functional descriptions not shown in figure.
Participants were told that the average completion time for this task was 15 minutes, but that they would
have as much time as they wanted to reach an optimal conclusion. Their performance was timed to examine
the influence of time spent evaluating. Timing began once participants read and indicated they understood
the instructional materials and began problem solving. After participants completed the evaluation, they
were asked what their conclusion regarding the hypothesis was.
3.2.1. Experimental conditions
Participants were evenly divided into two groups: the matrix group was provided with instructions to use a formal decision matrix to evaluate the experimental data relative to the hypothesis, while the intuition group received no formal instructions. The instructions given to the matrix group were adapted from the Analysis
of Competing Hypotheses (ACH) methodology. ACH was developed by Heuer (1999) as a decision-making
tool to improve the forecasting accuracy of information analysts. The 8-step method directs analysts to
create a matrix that facilitates the comparison of alternative hypotheses and the evaluation of the relevance
and diagnostic value of available evidence. It has been demonstrated to reduce reliance on cognitive biases,
including confirmation bias, in complex decision-making tasks (Brasfield, 2009). Participants in the
intuition group were not given any specific evaluation directions and were only instructed to record
considerations relevant to their evaluation as point form notes on a blank sheet of paper. Participants in the
matrix group recorded relevant considerations within the matrix itself.
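As an illustration of the kind of structure the matrix group worked with, the sketch below shows one way an ACH-style matrix might be represented, assuming simple consistent/inconsistent/neutral scores. This is a toy rendering of the idea rather than Heuer's (1999) full 8-step procedure, and all evidence descriptions and scores are hypothetical.

```python
# Toy ACH-style matrix: evidence items scored against competing hypotheses as
# consistent (+1), inconsistent (-1), or neutral (0). All entries are invented.

hypotheses = ["fixation occurred", "no fixation occurred"]

# evidence description -> (score vs. hypothesis 0, score vs. hypothesis 1)
matrix = {
    "concept reuses the example's sprinkler head":  (+1, -1),
    "concept shares no elements with the example":  (-1, +1),
    "concept uses a timer (a generic feature)":     (0, 0),  # low diagnosticity
}

# ACH weighs hypotheses by the evidence against them: the hypothesis with the
# least inconsistent evidence is favoured.
for i, h in enumerate(hypotheses):
    against = sum(1 for scores in matrix.values() if scores[i] < 0)
    print(f"{h}: {against} item(s) of inconsistent evidence")
```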
3.3. Coding confirmation and disconfirmation
Participants’ self-generated records were analysed to determine the ratio of confirmatory to
disconfirmatory evidence they identified and evaluated. A note indicating the consideration of evidence for, or an argument confirming, the fixation hypothesis was counted as one instance of confirmatory evidence.
Similar documentation that did not support the fixation hypothesis was counted as one instance of
disconfirmatory evidence (Figs. 2 and 3 show examples of coded data, matrix and intuition, respectively).
Concept 1 (High Fixation)
  Features of design from example: overhead release of water (C); fed by water line (C); sprinkler head (C); periodic release at intervals (requiring timer) (C); valve of some kind (C)
  Features of design from outside sources: ball float valve (D)

Concept 2 (Medium Fixation)
  Features of design from example: fed by water line (C); overhead release of water (C)
  Features of design from outside sources: water wheel release (D); continual release of water at fixed tempo (no timer required) (D)

Concept 3 (High Fixation)
  Features of design from example: overhead release of water (C); sprinkler head (C); fed by water line (C); periodic release at timed intervals (requiring timer) (C)
  Features of design from outside sources: natural cloud source / fed by rainwater (D)

Concept 4 (Low Fixation)
  Features of design from example: ? [sic]
  Features of design from outside sources: dripper release (D); continual water release at natural tempo (D); soil fed stream (D); no water line (D); no timer required (D)

Concept 5 (Low Fixation)
  Features of design from example: timer required (C) (based on functional description)
  Features of design from outside sources: external movement brings plant to water, instead of bringing water to plant (D); hydraulic lift required (D); no flow of water stream (D); higher relative energy required (D)

Concept 6 (Low Fixation)
  Features of design from example: ? [sic]
  Features of design from outside sources: no water stream (D); no timer required (D); no external movement (D); sponge fed (D); soil fed hydration (D)

Fig. 2. Coded participant matrix: 12 confirming (C) and 18 disconfirming (D) instances.
Top Left: incorporates water line(C) and a similar looking sprinkler head(C)
Top Middle: incorporates a house water line(C)
Top Right: incorporates many elements(C), except the water line(D)
Bottom Left: seems to incorporate no elements(D)
Bottom Middle: incorporates predetermined intervals(C)
Bottom Right: seems to incorporate no elements(D)
Fig. 3. Coded participant notes: 5 confirming(C) and 3 disconfirming(D) instances.
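The coding scheme above lends itself to a simple mechanical tally. The sketch below, assuming notes are tagged with (C) and (D) markers as in Figs. 2 and 3, counts instances and reports proportions; the note strings are abbreviated, hypothetical stand-ins for actual participant records.

```python
# Tally (C) and (D) codes in a participant's notes and report proportions.
# The notes below are shortened, hypothetical stand-ins for real records.

notes = [
    "incorporates water line(C) and a similar looking sprinkler head(C)",
    "incorporates many elements(C), except the water line(D)",
    "seems to incorporate no elements(D)",
]

confirm = sum(note.count("(C)") for note in notes)
disconfirm = sum(note.count("(D)") for note in notes)
total = confirm + disconfirm
print(f"confirmatory: {confirm} ({confirm / total:.0%})")           # 3 (60%)
print(f"disconfirmatory: {disconfirm} ({disconfirm / total:.0%})")  # 2 (40%)
```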
3.4. Results
Three participants exhibited behaviour that, it was believed, would confound the comparison between the experimental groups. Participant 7 was assigned to the matrix group but failed to follow the directions provided and instead relied on intuition. Participants 6 and 12 were assigned to the intuition group; however, they utilized matrices to formalize their decision process in a way that simulated the treatment condition. Therefore, these participants were evaluated in the group that more accurately reflected their decision-making procedure, resulting in two groups (matrix and no-matrix). Table 2 shows the data collected.
Table 2. Conditions, data, and participant conclusions
Participant No.  Matrix  Confirm  Disconfirm  Proportion (%)          Time (min)  Conclusion
                                              (Confirm - Disconfirm)
1                Yes     8        10          44.4 - 55.6             15.7        Conditional
2                No      4        4           50.0 - 50.0             6.3         Conditional
3                Yes     7        8           46.7 - 53.3             14.4        Conditional
4                No      3        1           75.0 - 25.0             6.8         True
5                Yes     9        9           50.0 - 50.0             20.2        Conditional
6                Yes     12       18          40.0 - 60.0             20.5        False
7                No      1        2           33.3 - 66.7             9.7         Inconclusive
8                Yes     4        1           80.0 - 20.0             10.2        True
9                Yes     11       16          40.7 - 59.3             35.9        Inconclusive
10               No      8        6           57.1 - 42.9             19.3        True
11               Yes     5        11          31.3 - 68.8             17.9        False
12               Yes     6        18          25.0 - 75.0             22.8        Inconclusive
13               No      5        3           62.5 - 37.5             11.8        False
14               No      7        11          38.9 - 61.1             11.6        False
15               No      4        2           66.7 - 33.3             5.3         Inconclusive
16               No      2        3           40.0 - 60.0             15.2        True
3.4.1. Comparisons to ideal
Two single-sample t-tests were used to compare the proportion of evidence evaluated in the matrix group and the no-matrix group to a hypothetical ideal group. Based on the concepts provided (see Fig. 1 and Sec. 3.2), the ideal proportion of evidence available was taken to be 33.3% confirmatory and 66.7% disconfirmatory. Because the proportion of disconfirmatory cases for each group is calculated by dividing the number of disconfirmatory cases by the total number of cases, once this proportion is known the confirmatory proportion is not free to vary; therefore, only one t-test per group is required. The single-sample t-tests compared the proportion of disconfirmatory evidence evaluated in the matrix group, and in the no-matrix group, to an ideal population mean (M = 66.7%) (see Fig. 4).
Fig. 4. Proportions of confirming and disconfirming instances evaluated in the matrix and no-matrix conditions relative to the hypothetical ideal (error bars: 95% CI).
The matrix group did not significantly underestimate the proportion of disconfirming evidence (M = 0.55, SD = 0.16) relative to the ideal, t(7) = 1.971, p = 0.089. However, the no-matrix group significantly underestimated the proportion of disconfirming evidence (M = 0.47, SD = 0.15) relative to the ideal, t(7) = 3.73, p = 0.007. The inverse is necessarily true when considering the proportion of confirmatory cases evaluated (i.e., the matrix group did not significantly over-report the proportion of confirmatory cases, while the no-matrix group did). However, there was no statistically significant difference in the proportion of evidence evaluated between the matrix and no-matrix groups. If the matrix and no-matrix groups are combined into a single group, the proportion of disconfirmatory evidence evaluated overall (M = 0.51, SD = 0.16) is significantly lower than the ideal proportion, t(15) = 3.96, p = 0.001.
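These t-tests can be recomputed directly from Table 2. The sketch below, assuming SciPy is available, derives each participant's disconfirmatory proportion from the Confirm and Disconfirm counts and tests each group against the ideal of 2/3; the statistics are negative because both group means fall below the ideal, matching the magnitudes reported above.

```python
# Recomputing the single-sample t-tests of Sec. 3.4.1 from Table 2.
from scipy import stats

IDEAL = 2 / 3  # by design, 66.7% of the available evidence was disconfirmatory

# disconfirm / (confirm + disconfirm) per participant, from Table 2
matrix_grp = [10/18, 8/15, 9/18, 18/30, 1/5, 16/27, 11/16, 18/24]  # P1,3,5,6,8,9,11,12
no_matrix  = [4/8, 1/4, 2/3, 6/14, 3/8, 11/18, 2/6, 3/5]           # P2,4,7,10,13,14,15,16

for name, group in [("matrix", matrix_grp), ("no-matrix", no_matrix)]:
    t, p = stats.ttest_1samp(group, IDEAL)
    print(f"{name}: t({len(group) - 1}) = {t:.2f}, p = {p:.3f}")
# Expected output: matrix t(7) = -1.97, p = 0.089; no-matrix t(7) = -3.73, p = 0.007
```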
3.4.2. Effect of time
Hallihan et al. (2012) noted a strong and statistically significant correlation between the amount of time participants spent solving the problem and the quantity of evidence evaluated: confirmatory (r = 0.72, p < 0.01), disconfirmatory (r = 0.76, p < 0.01). Additional analyses revealed a moderate correlation between the proportion of evidence evaluated and the amount of time spent on task: disconfirmatory (r = 0.49), confirmatory (r = -0.49), t(14) = 2.1, p = 0.054 (see Fig. 5). There were no significant correlations between time and the proportion of evidence evaluated when examining the groups separately.
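The pooled correlation with time can likewise be recomputed from Table 2, as in the sketch below (again assuming SciPy is available).

```python
# Recomputing the pooled correlation of Sec. 3.4.2 from Table 2.
from scipy import stats

time = [15.7, 6.3, 14.4, 6.8, 20.2, 20.5, 9.7, 10.2,
        35.9, 19.3, 17.9, 22.8, 11.8, 11.6, 5.3, 15.2]    # minutes, P1..P16
disc = [10/18, 4/8, 8/15, 1/4, 9/18, 18/30, 2/3, 1/5,
        16/27, 6/14, 11/16, 18/24, 3/8, 11/18, 2/6, 3/5]  # disconfirmatory proportion

r, p = stats.pearsonr(time, disc)
print(f"r = {r:.2f}, p = {p:.3f}")  # expected: r = 0.49, p = 0.054, as reported above
```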
3.4.3. Participant conclusions
Participants were asked to provide a final conclusion regarding the fixation-on-examples hypothesis.
Conclusions fell into one of four categories (see Table 3):
- True – the data indicate the hypothesis is true
- Conditional – the data indicate that the hypothesis is true sometimes
- Inconclusive – there is not enough data to support or reject the hypothesis
- False – the hypothesis is not supported by the data
The sample size is too small to make meaningful statistical comparisons between participant conclusions
and other variables (e.g., condition or time spent evaluating). However, certain trends stand out and may
make for interesting future research. Three of the four participants who concluded the hypothesis was true
were from the no-matrix condition; this group also spent the least time evaluating the hypothesis and
evaluated the most confirmatory information. The inconclusive group spent the most time evaluating; however, the average may be inflated by participant 9 (35.9 minutes); there is substantially more variance in the time spent evaluating in this group than in the others.
Fig. 5. Correlation between the proportion of disconfirmatory evidence evaluated and the amount of time
spent on the evaluation task (r = 0.49).
Table 3. Participant conclusions, average time spent evaluating, ratio of confirmatory to disconfirmatory
evidence evaluated, and number of participants in the matrix condition
                          True              Conditional       Inconclusive      False
Participant No.           4, 8, 10, 16      1, 2, 3, 5        7, 9, 12, 15      6, 11, 13, 14
Average Time (min) (SD)   12.74 (5.62)      13.99 (5.79)      18.15 (13.74)     15.16 (4.49)
Average Con/Dis (%) (SD)  63.0/37.0 (18.2)  47.8/52.2 (2.7)   41.4/58.6 (18.0)  43.2/56.8 (13.5)
Matrix                    1/4               3/4               2/4               2/4
3.5. Discussion
3.5.1. Interpreting participant evaluations
The comparison of the proportion of evidence evaluated suggests that individuals evaluating without the decision matrix deviated significantly from the ideal ratio of confirmatory to disconfirmatory evidence, while individuals in the matrix condition did not. In addition, the trends visible in Fig. 4 suggest that the matrix group's evaluation was more similar to the ideal than the no-matrix group's. The caveat is that there was no statistically significant difference in the proportion of evidence evaluated between the two groups. Therefore, these groups could be combined to examine the overall comparison of participant evaluations to the ideal, in which case the total group significantly under-represented the proportion of disconfirmatory data and over-represented the proportion of confirmatory data.
This task required individuals to assess participant-generated concepts in order to evaluate the validity of a design hypothesis, a method that directly generalizes to design experiments involving the evaluation of participant-generated design concepts. In addition, the sample used in this study was entirely composed
of graduate students, a representative sample given that graduate research assistants often conduct the data
acquisition and coding process in academic research. Even though the participants were only briefly
exposed to literature on design fixation, and had no vested interest in validating the hypothesis, they still
over-represented the proportion of confirmatory information relative to the hypothetical ideal. These results
suggest that design researchers should take care to avoid the overrepresentation of data that confirms
hypotheses.
3.5.2. Interpreting participant conclusions
Although participants may have exhibited a confirmatory bias in acquiring or reporting data for
evaluation, the influence of this bias on their final conclusions with regard to the hypothesis varied. The
participants were generally sensitive to the fact that fixation may have occurred in some concepts and not
others. Given that a phenomenon does not have to manifest in every possible instance to prove its existence
(e.g., the fact that only a small proportion of individuals are schizophrenic does not mean that schizophrenia
does not exist), participants who concluded the hypothesis was conditionally true or inconclusive arguably
reached valid conclusions. However, these groups differed in how they responded to the presence of
confirming and disconfirming evidence. The conditionally true group concluded that the hypothesis was
true in some cases because there were some confirmatory data; the inconclusive group determined that in
the presence of both confirmatory and disconfirmatory data, a firm conclusion could not be drawn. While
the conclusions seem similar, the conditionally true group interpreted the results in order to validate the
hypothesis, suggesting confirmation bias influenced their conclusions. In addition, 25% of individuals
concluded that the fixation-on-examples hypothesis was well supported by the data, even when the majority
of concepts provided showed very little evidence of fixation. Therefore, a confirmatory bias seems present
in the conclusions of half of the participants in this study.
Of the four participants who concluded the hypothesis was true, three performed the evaluation without the matrix (see Table 3). The conclusions of participants 4, 8, and 10 seem logical considering they identified a greater proportion of confirmatory evidence than disconfirmatory. However, participant 16 identified a higher proportion of disconfirmatory evidence than confirmatory. This seemingly contradictory conclusion
may be a result of failing to properly evaluate the diagnostic value of contradictory evidence with respect
to the hypothesis, an effect observed in other information analysis tasks (Cheikes et al., 2004) and a known
consequence of confirmation bias (Nickerson, 1998). Of the participants who concluded the hypothesis was
false, two were in the matrix group and two were in the no matrix group. Although participant 13 identified
more confirmatory evidence than disconfirmatory, all of these participants stated the hypothesis was only
valid if it was always true.
3.5.3. Empirical limitations
These conclusions are based on the assumption that an unbiased or ideal evaluator would determine that
the ratio of disconfirming to confirming evidence available from the six concepts was 2:1. This ratio is
based on an evaluation of the six concepts along four functional elements identified by Perttula and
Liikkanen (2006) in rating fixation on the example concept: 1) water source, 2) regulation of flow, 3) water
transfer, and 4) energy source. The four disconfirmatory concepts were independently rated as the least
fixated on the example solution out of 123 concepts from a previous fixation experiment (Hallihan & Shu,
2011). The two confirmatory concepts were designed by the lead author and incorporated three out of four
functional elements of the example, and had high aesthetic similarity.
Another limitation is the relatively small sample size. Increasing the sample size would likely lead to
decreased error variance, and if the sample means are representative, both conditions would likely deviate significantly from the ideal. However, this decrease in variance or increase in power might also support the conclusion that the matrix group performed significantly better than the no-matrix group.
These possibilities, as well as the trends observed here, offer areas for future research with larger samples.
Finally, several participants (6, 7, 12) were included in groups for analysis that they were not originally assigned to. One option would have been to exclude these individuals from the analysis completely; if this were done, the same trends are observed: the matrix group's deviation from the ideal (M = 0.15, SD = 0.17) approaches significance, t(5) = 2.3, p = 0.07, and the no-matrix group's deviation (M = 0.22, SD = 0.14) remains significant, t(6) = 4.38, p = 0.005.
3.5.4. Effect of the matrix
The analysis does not demonstrate that the use of formal decision matrices significantly improved the performance of the matrix group relative to the no-matrix group. This observation is consistent with research from Cheikes et al. (2004) involving the use of the ACH procedure with individuals performing an accident analysis task. However, Brasfield (2009) observed that the ACH procedure did mitigate confirmation bias in participants engaged in a political analysis task. In addition, we (Hallihan et al., 2012) previously suggested that the use of the matrix facilitated increased time spent on task,
accounting for a significant difference in the amount of data evaluated between groups.
At this point it is difficult to explicitly state whether formal decision matrices (specifically those
resembling the analysis of competing hypotheses procedure) mitigate confirmation bias in information
analysis. Perhaps one benefit of this procedure is that it encourages individuals to actively consider multiple
hypotheses. In this way, it may make disconfirming cases or instances more salient and available, as that
information may be perceived as “confirming” one of the identified alternate hypotheses. This would be
congruent with other research suggesting that priming individuals to think counterfactually reduces
confirmation bias (Galinsky & Moskowitz, 2000).
4. Conclusion
In this paper, it is argued that confirmation bias is a pervasive phenomenon in human information
processing with the potential to influence the judgements of designers and design researchers. In some
instances, the influence of confirmation bias has the potential to result in erroneous conclusions or
undesirable decision outcomes. In the design process, these outcomes may result from failing to see the shortcomings of design solutions. Such failures may be perpetuated through the interaction of confirmation bias with other known design biases, such as fixation and the misapplication of biological analogies, as well as through failures to respond to design feedback and through overconfidence.
We have also argued that confirmation bias has the potential to influence design researchers. In the
experiment presented, it was argued that individuals have a tendency to over-represent evidence that
confirms research hypotheses and under-represent information that disconfirms them. However, this did
not directly translate into erroneous conclusions regarding the hypothesis. Further research is needed to
tease apart the relationship between the quantity of evidence evaluated and actual conclusions. One
confounding factor is the diagnostic weight individuals assign to evidence in the formation of conclusions.
This too may be influenced by confirmation bias (Cheikes et al., 2004) but is unaccounted for in the present
research. Even if participants identified more confirmatory information than disconfirmatory, how
diagnostic that information is perceived to be with respect to the hypothesis likely had bearing on their final
conclusions.
4.1. Mitigating confirmation bias
The value of using formal decision matrices to mitigate confirmation bias is not clear. While this
research does not indicate they improve performance relative to intuition, there has been research to suggest
otherwise. In addition, a significant and positive correlation between the amount of disconfirmatory
evidence and time spent on the task was observed; the matrix group spent significantly more time evaluating
than the no-matrix group. Therefore, relying on the matrix procedure may have facilitated spending more time on the task, which in turn led participants to identify more disconfirmatory data relative to confirmatory data, reflecting a closer-to-optimal evaluation.
Decision matrices may also reduce reliance on confirmation bias in another way. Jonas et al. (2001)
found that when individuals evaluate information sequentially, they exhibit a stronger confirmation bias
than when they evaluate information simultaneously. The researchers suggest that sequential information
processing induces a focus on prior decisions, increasing commitment to them. Interestingly, in the ACH
procedure and the instructions given to participants in this experiment, individuals list all the relevant
information they can identify and then consider the diagnostic value of each piece of evidence relevant to
alternate conclusions. In this way, these procedures facilitate simultaneous information processing.
Another method to mitigate confirmation bias involves educating and training individuals. Nisbett and
Ross (1980) suggest that educating individuals about the nature of cognitive biases, and enhancing their
awareness of the processes underlying them, is an effective approach to mitigate biased cognition generally.
However, simply making individuals aware of the bias itself does not seem to be an effective method to
mitigate it (Burke, 2006).
In addition, it is possible that increased accountability and negative feedback could mitigate
confirmation bias. Negative feedback and increased accountability have been shown to decrease
overconfidence and strategy adherence (Arkes et al., 1987; Jermias, 2006) and as a result may similarly
reduce confirmation bias. Accountability and negative feedback may be incorporated as part of the peer
review process for design researchers, as well as into the design process. Silverman (1992) explores the use
of a critic embedded within CAD software to mitigate human error in the design process, and methods such
as this may be valuable if they could be effectively applied to encourage designers to think critically about their
own ideas. Similarly, this process may also prime individuals to think counterfactually, which can also
reduce confirmation bias (Galinsky & Moskowitz, 2000). However, Silverman (1992) suggests that shifting
to a disconfirmatory strategy did not account for the effectiveness of the critique method he examined.
4.2. Publication bias
We may be resistant to the idea that confirmation bias influences the academic community; however, evidence of publication bias (i.e., the over-representation of positive results in the academic literature) indicates that a preference for confirmatory information does exist (e.g., see Easterbrook et al., 1991). Dickersin (1990) suggests this bias results from investigators', reviewers', and editors' preference to submit or accept papers based on the strength and direction of their findings. In design, this could manifest as an over-representation of studies that demonstrate the efficacy of interventions relative to studies that find null effects for the same interventions. Consider also the over-emphasis of anecdotal reports regarding
biological analogies as stimuli to inspire insight. Anecdotal examples are often cited to describe the
instances when the serendipitous observation of a natural phenomenon inspired a solution for a design
problem. However, those cases in which a relevant analogy was observed and failed to inspire insight fall
by the wayside. Failing to consider when methods are ineffective may make a comparison between alternate
methods less meaningful.
4.3. Summary
Future research is needed to more precisely characterize the influence of confirmation bias across a range of
design processes. However, at this point it seems that confirmation bias has the potential to negatively
influence design judgements and contribute to known design biases. Further research examining various
methods to mitigate confirmation bias (e.g., formal decision tools, education and training, critical feedback
and accountability) could provide valuable insights to enhance information processing in design and design
research.
Acknowledgments
The authors acknowledge the financial support of the Natural Sciences and Engineering Research
Council of Canada. The authors also thank those who participated in the experiments.
References
Arkes, H. R., & Blumer, C. (1985). The Psychology of Sunk Cost. Organizational Behavior and Human
Decision Processes, 35(1), 124-140. doi: http://dx.doi.org/10.1016/0749-5978(85)90049-4
Arkes, H. R., Christensen, C., Lai, C., & Blumer, C. (1987). Two Methods of Reducing Overconfidence.
Organizational Behavior and Human Decision Processes, 39(1), 133-144. doi:
http://dx.doi.org/10.1016/0749-5978(87)90049-5
Aurup, G. M., & Akgunduz, A. (2012). Pair-Wise Preference Comparisons Using Alpha-Peak Frequencies. Journal of Integrated Design and Process Science, 16(4), 3-18. doi: http://dx.doi.org/10.3233/jid-2012-0021
Bernoulli, D. (1738). Exposition of a New Theory on the Measurement of Risk. Econometrica: Journal of
the Econometric Society, 22(1), 22-36.
Brasfield, A. D. (2009). Forecasting Accuracy and Cognitive Bias in the Analysis of Competing
Hypotheses. M.Sc., Mercyhurst College.
Burke, A. (2006). Improving Prosecutorial Decision Making: Some Lessons of Cognitive Science. William & Mary Law Review, 47, 1587-1634.
Burke, A. (2007). Neutralizing Cognitive Bias: An Invitation to Prosecutors. NYU Journal of Law &
Liberty, 2, 512-530.
Busby, J. S. (1999). The Problem with Design Reuse: An Investigation into Outcomes and Antecedents.
Journal of Engineering Design, 10(3), 277-296.
Cardoso, C., & Badke-Schaub, P. (2011). Fixation or Inspiration: Creative Problem Solving in Design. The
Journal of Creative Behavior, 45(2), 77-82.
Cheikes, B. A., Brown, M. J., Lehner, P. E., & Adelman, L. (2004). Confirmation Bias in Complex Analyses.
MITRE Center for Integrated Intelligence Systems. Bedford, MA.
Cheong, H., Hallihan, G., & Shu, L. H. (2012). Understanding Analogical Reasoning in Biomimetic
Design: An Inductive Approach. Fifth International Conference on Design Computing and Cognition,
College Station, TX.
Cosmides, L. (1989). The Logic of Social Exchange: Has Natural Selection Shaped How Humans Reason?
Studies with the Wason Selection Task. Cognition, 31(3), 187-276.
Dickersin, K. (1990). The Existence of Publication Bias and Risk Factors for Its Occurrence. JAMA: the
Journal of the American Medical Association, 263(10), 1385-1389.
Dunbar, K. (1997). How Scientists Think: On-Line Creativity and Conceptual Change in Science. In T. B.
Ward, S. M. Smith & J. V. Smith (Eds.), Creative Thought: An Investigation of Conceptual Structure
and Processes (1 ed., pp. 567). Washington, DC: American Psychological Association.
Easterbrook, P. J., Gopalan, R., Berlin, J. A., & Matthews, D. R. (1991). Publication Bias in Clinical
Research. The Lancet, 337(8746), 867-872.
Fiske, S. T., & Taylor, S. E. (1984). Social Cognition (1 ed.). MA: Addison-Wesley Pub. Co.
Foster, C. (2012). Improving Failure Mode and Effects Analysis as a Cognitive Simulation. ASME
IDETC/CIE, Chicago, IL.
Friedrich, J. (1993). Primary Error Detection and Minimization (Pedmin) Strategies in Social Cognition: A
Reinterpretation of Confirmation Bias Phenomena. Psychological Review, 100(2), 298-319.
Galinsky, A. D., & Moskowitz, G. B. (2000). Counterfactuals as Behavioral Primes: Priming the Simulation
Heuristic and Consideration of Alternatives. Journal of Experimental Social Psychology, 36(4), 384-
409.
Gero, J. S. (1990). Design Prototypes: A Knowledge Representation Schema for Design. AI Magazine,
11(4), 26-36.
Green, D. M., & Swets, J. A. (1966). Signal Detection Theory and Psychophysics (Vol. 1). New York, NY:
Wiley.
Hallihan, G., Cheong, H., & Shu, L. H. (2012). Confirmation and Cognitive Bias in Design Cognition.
ASME IDETC/CIE, Washington, DC.
Hallihan, G., & Shu, L. H. (2011). Creativity and Long-Term Potentiation: Implications for Design. ASME
IDETC/CIE, Chicago, IL.
Heuer, R. J. (1999). Psychology of Intelligence Analysis. Washington, DC: Center for the Study of
Intelligence - CIA.
Isenberg, D. J. (1988). How Senior Managers Think. In D. Bell, H. Raiffa & A. Tversky (Eds.), Decision
Making: Descriptive, Normative, and Prescriptive Interactions. New York, NY: Cambridge
University Press.
Jansson, D. G., & Smith, S. M. (1991). Design Fixation. Design Studies, 12(1), 3-11.
Jermias, J. (2006). The Influence of Accountability on Overconfidence and Resistance to Change: A
Research Framework and Experimental Evidence. Management Accounting Research, 17(4), 370-388.
Johnson‐Laird, P. N., Legrenzi, P., & Legrenzi, M. S. (1970). Reasoning and a Sense of Reality. British
Journal of Psychology, 63(3), 395-400.
Jonas, E., Schulz-Hardt, S., & Frey, D. (2001). Confirmation Bias in Information Seeking under
Simultaneous Vs. Sequential Processing. Journal of Experimental Applied Psychology, 48(3), 239-
247.
Katsikopoulos, K. V. (2012). Decision Methods for Design: Insights from Psychology. [Technical Brief].
Journal of Mechanical Design, 134(8).
Klayman, J., & Ha, Y.-W. (1987). Confirmation, Disconfirmation, and Information in Hypothesis Testing.
Psychological Review, 94(2), 211-228.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for Confidence. Journal of Experimental
Psychology: Human Learning and Memory, 6(2), 107-118.
Krems, J. F., & Zierer, C. (1994). Are Experts Immune to Cognitive Bias? Dependence of "Confirmation
Bias" on Specialist Knowledge. Zeitschrift für Experimentelle und Angewandte Psychologie, 41(1),
98-115.
Lehner, P. E., Adelman, L., Cheikes, B. A., & Brown, M. J. (2008). Confirmation Bias in Complex
Analyses. IEEE Transactions on Systems, Man and Cybernetics, 38(3), 584-592.
Linsey, J. S., Tseng, I., Fu, K., Cagan, J., Wood, K. L., & Schunn, C. (2010). A Study of Design Fixation,
Its Mitigation and Perception in Engineering Design Faculty. Journal of Mechanical Design, 132(4),
1-12.
Luh, D., Ma, C., Hsieh, M., & Huang, C. (2012). Using the Systematic Empathic Design Method for
Customer-Centered Products Development. Journal of Integrated Design and Process Science, 16(4),
31-54.
Maxwell, T. T., Ertas, A., & Tanink, M. M. (2002). Harnessing Complexity in Design. Journal of Integrated
Design and Process Science, 6(3), 63-74.
Nguyen, T., & Zeng, Y. (2012). A Theoretical Model of Design Creativity: Nonlinear Design Dynamics and
Mental Stress-Creativity Relation. Journal of Integrated Design and Process Science, 16(3), 65-88.
Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General
Psychology, 2(2), 175-220.
Nisbett, R. E., & Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment.
Englewood Cliffs, NJ: Prentice-Hall.
Otto, K. N., & Wood, K. L. (1999). Customer Integrated Systematic Design. Journal of Integrated Design
and Process Science, 3(4), 61-74.
Perttula, M. K., & Liikkanen, L. A. (2006). Structural Tendencies and Exposure Effects in Design Idea
Generation. ASME IDETC/CIE, Philadelphia, PA.
Pines, J. M. (2006). Profiles in Patient Safety: Confirmation Bias in Emergency Medicine. Academic
Emergency Medicine, 13(1), 90-94.
Shu, L. H. (2012). Obstacles to Creativity in Biomimetic Design. Proceedings of Fifth International Conference on Design Computing and Cognition, College Station, TX.
Silverman, B. G. (1985). Software Cost and Productivity Improvements: An Analogical View. IEEE
Transactions on Systems, Man and Cybernetics, 18(5), 86-96.
Silverman, B. G. (1992). Modeling and Critiquing the Confirmation Bias in Human Reasoning. IEEE
Transactions on Systems, Man and Cybernetics, 22(5), 972-982.
Silverman, B. G., & Mezher, T. M. (1992). Expert Critics in Engineering Design: Lessons Learned and Research Needs. AI Magazine, 13(1), 45-62.
Spedding, J., Ellis, R., & Heath, D. (1863). The Works of Francis Bacon (Vol. VIII: Translations of the Philosophical Works). Boston, MA: Taggard and Thompson.
Tversky, A., & Kahneman, D. (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, UK:
Cambridge University Press.
Viswanathan, V. K., & Linsey, J. S. (2011). Design Fixation in Physical Modeling: An Investigation on the
Role of Sunk Cost. ASME IDETC/CIE, Washington, DC.
Wason, P. C. (1960). On the Failure to Eliminate Hypotheses in a Conceptual Task. Quarterly Journal of Experimental Psychology, 12(3), 129-140.
Wason, P. C. (1968). Reasoning About a Rule. Quarterly Journal of Experimental Psychology, 20(3), 273-
281.
Wickens, C. D., & Hollands, J. G. (2000). Engineering Psychology and Human Performance (3 ed.). Upper Saddle River, NJ: Prentice Hall Inc.
Yan, B., & Zeng, Y. (2011). Design Conflict: Conceptual Structure and Mathematical Representation.
Journal of Integrated Design and Process Science, 15(1), 75-89.
Author Biographies
Gregory Hallihan obtained his MASc in Mechanical and Industrial Engineering at the University of Toronto and his Bachelor's in Psychology at the University of Calgary. His research focused on cognitive
biases in design.
L.H. Shu is an Associate Professor in the Department of Mechanical and Industrial Engineering at the
University of Toronto, where she held the Wallace G. Chalmers Chair of Engineering Design. She obtained
her PhD and SM from MIT and her BS from the University of Nevada, all in mechanical engineering. She
is active in ASME Design Theory and Methodology, is a fellow of CIRP and received the F.W. Taylor
Medal Award in 2004. Her research interests include creativity in conceptual design, systematic
identification and application of biological analogies in biomimetic (biologically inspired) design, and
identifying and overcoming obstacles to personal environmentally conscious behavior.