On Measuring Engineering Risk Attitudes
Douglas L. Van Bossuyt
Complex Engineered Systems
Design Laboratory
School of Mechanical, Industrial
and Manufacturing Engineering
Oregon State University
Corvallis, OR 97331
Email: Douglas.VanBossuyt@gmail.com
Andy Dong
Faculty of Engineering and Information Technologies
University of Sydney
Sydney, NSW, 2006, Australia
Email: Andy.Dong@sydney.edu.au
Irem Y. Tumer
Complex Engineered Systems
Design Laboratory
School of Mechanical, Industrial
and Manufacturing Engineering
Oregon State University
Corvallis, OR 97331
Email: Irem.Tumer@oregonstate.edu
Lucila Carvalho
Faculty of Education and Social Work
University of Sydney
Sydney, NSW, 2006, Australia
Email: Lucila.Carvalho@sydney.edu.au
Risk management is a critical part of engineering practice in industry. Yet, the attitudes of engineers toward risk remain largely unknown and unmeasured. This paper presents the development
of a psychometric scale, the Engineering-Domain-Specific Risk-
Taking (E-DOSPERT) test, to measure engineers’ risk aversion
and risk seeking attitudes. Consistent with a similar psychomet-
ric scale to assess general risk attitudes, engineering risk attitude
is not single-domain and is not consistent across domains. Engi-
neers have different risk attitudes toward five identified domains
of engineering risk: Processes, Procedures and Practices; En-
gineering Ethics; Training; Product Functionality and Design;
and Legal Issues. Psychometric risk profiling with E-DOSPERT
provides companies a standard to assess domain-specific engi-
neering risk attitude within organizations and across organiza-
tions. It provides engineering educators with a standard to assess engineering students' understanding of the types of risks they would encounter in professional practice and their personal attitude toward responding to those risks. Appropriate interventions
can then be implemented to shape risk attitudes as appropriate.
Risk-based design decisions can also be shaped by a better un-
derstanding of engineer and customer risk attitude. Understand-
ing engineers’ risk attitudes is crucial in interpreting how in-
dividual engineers will respond to risk in their engineering ac-
tivities and the numerous design decisions they make across the
various domains of engineering risk found in professional prac-
tice.
1 INTRODUCTION
Risk is an integral part of engineering design. Risk propensity is often considered an essential ingredient for innovative design, perhaps best exemplified in the IDEO motto “Fail often to succeed sooner,” implying a willingness to take risks early in the design process to allow a product concept to fail, thereby enabling learning. On the other hand, risk aversion pervades certain industries, such as power generation and aerospace. There is no one correct risk attitude across all engineering sectors, and an action or event that one engineer considers ’risky’ another may not [1]. Rather, risk is an issue that must be managed.
Footnote: Address all correspondence to this author. A version of this paper appeared in the Proceedings of the 2011 International Design Engineering Technical Conference & Computers in Engineering Conference, 23rd International Conference on Design Theory and Methodology.
Risk and reliability engineers manage risks by identifying
the potential sources of risks and then finding ways to mitigate
those risks. Within engineering design, there is no shortage of
methods to identify the risk of failure of components [2, e.g.].
Standards such as ISO 31000:2009 [3] prescribe a framework for
managing risk. ISO 31000:2009 is the International Organization
for Standardization risk management principles and guidelines
standard. The standard systematically lays out the principles be-
hind risk management and outlines guidelines for risk manage-
ment practitioners to follow. The standard identifies four aspects
to risk management: risk identification, risk analysis, risk evalu-
ation, and risk treatment. Having been implemented in the man-
agement of engineering, the framework has only recently been
applied to risk management in product design [4]. While the
standard prescribes effective principles and guidelines for orga-
nizations to establish risk management policies and procedures,
it, like formal engineering risk analysis methods, falls short in
the assessment of organizational and personal attitudes to engi-
neering risk.
Understanding the risk attitudes of engineers is useful for
several reasons. By understanding the risk attitudes of engineers,
training can be conducted to harmonize an engineer’s perception
of risk – individual, subjective judgment of the severity and char-
acteristics of a risk – and risk attitude – the amount of risk that is
willingly taken on in order to realize a gain – with the company’s
risk perception and risk attitude. In systems engineering, under-
standing individual engineers’ risk perception and attitude holds
the promise of helping engineers to collaborate more effectively
and deliver a higher utility product with a lower development cost
and shorter development time [5]. Risk and reliability engineer-
ing stand to benefit from knowing risk attitude. Expert judgment
is directly affected by how engineers perceive risk and their risk
attitudes. By understanding individual risk perceptions and atti-
tudes, risk experts can explicitly normalize their expert opinions
with peers [1].
This paper presents the E-DOSPERT test, a psychometric
scale to assess engineering risk attitude, which is an engineer’s
mental response to the perception of uncertainty of objectives
that matter [6]. The research is motivated by an aim for a stan-
dard for the assessment of engineering risk attitudes to com-
plement risk management frameworks. The E-DOSPERT test
is modeled after the Domain-Specific Risk-Taking (DOSPERT)
test [7, 8]. The DOSPERT test is quickly becoming the most preferred risk attitude scale in psychology for its ability to predict future actions from risk attitude and to show whether observed risk behavior is based upon the person’s perception of risk or the person’s attitude toward the perceived risk.
The DOSPERT test has demonstrated both a high level of reli-
ability and construct validity while conclusively showing that a
person’s attitude toward risks associated with financial decisions
will differ from their risk attitude toward social activities among
several domains of risk. This paper addresses two research ques-
tions related to the development of E-DOSPERT. The first ques-
tion is whether engineering risk is domain dependent. Version A
of the E-DOSPERT test was constructed based upon principles
and guidelines in the ISO 31000:2009 standard on risk manage-
ment to address this question [3, 9]. If it is true that risk atti-
tudes in engineering are domain dependent, the second research
question concerns the domains of engineering risk. We identify
these domains through an exploratory factor analysis of the re-
sults from Version A and Version B of the E-DOSPERT test.
The following sections present necessary background mate-
rial on the DOSPERT test, the psychology of risk, and risk in
engineering. A method for the creation of the E-DOSPERT scale
is presented. Testing and validation results are reviewed and dis-
cussed. This paper concludes with discussion of future work and
implications of the E-DOSPERT scale.
2 Background
Risk can be defined in a variety of ways. Alternative def-
initions of risk and how those definitions relate to methods for
assessing risk attitudes are briefly examined in the following sec-
tion.
2.1 The Psychology of Risk Attitude
The ‘classic’ definition of risk is the parameter that differen-
tiates between the utility functions of different individuals [10].
Utility functions are representations of the preference or value
that individuals place upon event outcomes. The utility function
of individuals is generally expressed as an exponential, quadratic,
or logarithmic curve [10–12]. The Expected Utility (EU) hypoth-
esis theorizes that the preference of an individual choosing be-
tween risky options can be determined by a function of the return
of each option, the probability of that option coming to fruition,
and the individual’s risk aversion [13]. The EU framework and
related methods including prospect theory [14] traditionally view
the curves of an individual’s utility function as denoting either
risk aversion or risk seeking. In the context of risk attitudes, risk aversion is framed as a preference for taking the expected value of a gamble over playing the gamble; such a person is said to dislike taking risks [15].
As a result, risk attitude can be defined as a person’s position
on the risk aversion-risk seeking axis and is thought of as a per-
sonality trait. Hillson and Murray-Webster [6] further refine this risk aversion-risk seeking scale by inserting a mid-point, “risk tolerant”, defined as being comfortable with uncertainty and able to handle it if necessary, and by including the category “risk neutral”, defined as taking necessary short-term actions to deliver certain long-term outcomes.
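As a worked sketch of the EU hypothesis just described (the notation here is ours, added for exposition, and is not drawn from the cited sources), a decision-maker evaluates a gamble $X$ with outcomes $x_i$ occurring with probabilities $p_i$ by its expected utility
\[
EU(X) = \sum_i p_i \, u(x_i).
\]
A concave utility function, such as the exponential form $u(x) = 1 - e^{-ax}$ with $a > 0$, gives $u(\mathbb{E}[X]) > \mathbb{E}[u(X)]$ by Jensen's inequality, so the decision-maker prefers the certain expected value of the gamble to the gamble itself and is classified as risk averse; a convex $u$ reverses the inequality and corresponds to risk seeking, while a linear $u$ corresponds to risk neutrality.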
However, two issues have arisen that challenge the idea of
risk attitudes in the context of EU being a personality trait: cross-
method utility instability and inconsistent risk profiles across
risk domains. When different methods are employed to mea-
sure people’s utility, different classifications of risk-taking or
risk aversion often result [16]. Further, individual respondents
are not consistently risk averse or risk seeking across different
risk domains [17]. For example, managers have been found to
have different risk attitudes when evaluating financial and recre-
ational risks, and when using company money versus personal
money [18].
The concept of relative risk attitude was introduced in an
attempt to identify the component of risk-taking that has cross-
situational stability for individuals [19]. The hypothesis was that
the domain differences in apparent risk attitudes might be as a
result of domain-specific outcome marginal values. With the
marginal values factored out, stability across domains was ex-
pected. However, this was not the case under further review. No
evidence was found of cross-situational relative risk attitude sta-
bility in empirical studies [20].
The validity of EU-based risk attitude assessment is limited
due to these issues. There has been little success in predicting
individuals’ choices and behaviors in domains not assessed by
EU-based instruments [21]. Even with the limitations of EU-
based survey instruments, many are still in use. A more recent
method of determining risk attitude takes inspiration from the
world of finance [22]. The risk-return framework of risky choice assumes that people’s preferences for risky options reflect a trade-off between the riskiness of a choice and its Expected Value (EV). The
financial world equates riskiness of an option with its variance.
In risk-return models, perceived riskiness is treated as a variable
that can be different between individuals due to differences in
individuals’ content and context interpretations [20,23].
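A compact algebraic statement of the risk-return framework (a sketch consistent with the literature cited above; the functional form and coefficients are illustrative rather than estimated here) is
\[
\mathrm{Preference}(X) = a\,\mathrm{EV}(X) + b\,\hat{R}(X) + c,
\]
where $\mathrm{EV}(X)$ is the expected value of option $X$, $\hat{R}(X)$ is the decision-maker's perceived riskiness of $X$, and the sign and magnitude of $b$ capture the attitude toward perceived risk (negative $b$: perceived risk is repellent; positive $b$: perceived risk is attractive). In the purely financial formulation, $\hat{R}(X)$ reduces to the variance of $X$.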
The risk-return framework allows for people to have simi-
lar perceptions of risk and return between different domains but
in one domain prefer risk while in another domain prefer cau-
tion [7]. Having such preferences and perceptions would result
in different outcomes, as the risk-return framework predicts. The
term perceived risk attitude, previously conceptualized as risk-
repugnance [24], was coined to reflect the assumption that risk
in its pure form is negative and undesirable but that perceived
risk might be attractive to some individuals in certain domains
and circumstances [25]. Variances in perceived risk attitude are
thus a result of discrepancies between the perception of the risks
and benefits as determined by a decision-maker and an outside
observer. This is exemplified in research conducted in the man-
agement field, where what differentiates entrepreneurs from managers is a highly optimistic perception of risk on the part of the entrepreneurs rather than a greater preference for risk, as
one might expect [26].
Many studies have highlighted differences in the perception
of the riskiness of decisions in individuals, between groups, and
between cultures [27, 28]. Differences in risk perception have
also been found due to outcome framing [29]. In the context of
risk-return based models, perceived risk attitude has been found
to have cross-situation and cross-group consistency when dif-
ferences in the perception of riskiness are factored out [7, 23].
Rather than differences in risk attitude, risk-return models sug-
gest that the way people perceive risk affects the choice out-
comes.
To assess risk perceptions and attitude toward perceived
risk in different domains of risk, Weber et al. developed the
DOSPERT test and related scale [7, 8]. Six independent do-
mains of risk were identified including ethical, investment, gam-
bling, health/safety, recreational, and social domains. Four of
the domains were originally identified based upon the risk-taking
behavior literature [30] while the fifth and sixth domains were
found through analysis of survey results where the financial do-
main was split into investment and gambling domains [7], which
were suggested in previous research [18, 31]. Risk-taking was
found to be highly domain-specific between the identified do-
mains. Individual respondents were risk averse in some domains
and risk-neutral or risk seeking in others. Respondents were
found to not be consistently risk averse or risk seeking across
the six domains.
It was also found that preference for risk seeking or risk
aversion was influenced by the perceived benefits and risks of
the activity in question. This resulted in identifying two psycho-
logical variables including risk perception and attitude toward
perceived risk, which are consistent with risk-return based mod-
els [26]. Risk perception relates only to the extent to which a person sees risk in situations that contain uncertainty over the outcome or consequences, whereas risk attitude refers to the likelihood of engaging in an activity or being in situations that contain uncertainty over the outcome or consequences. Previous risk
attitude indices have been confounded by not distinguishing be-
tween the two psychological variables of risk perception and atti-
tude toward perceived risk [32]. Distinguishing between the risk
perception and risk attitude variables is largely irrelevant if only
prediction of future actions is desired. However, the distinction
between these variables becomes important when risk-taking is
assessed with the goal of changing risk-taking behavior [7].
Since the DOSPERT scale was developed and validated,
many other studies have replicated the results. Strong correlation
was found with the various subscales of Budner’s scale for intolerance of ambiguity [33] and with Zuckerman’s sensation-seeking scale [34].
Paulhus’ social desirability scale [35] was found to have signif-
icant correlation between the impression management subscale
and the ethics and health/safety subscales of DOSPERT. Thus,
the DOSPERT scale was found to have favorable correlations
with established scales. The DOSPERT scale has also been trans-
lated into several different languages and contexts including the
DOSPERT-G scale, a German-language version [36], a French-
language DOSPERT scale [37], and others [8]. Other scales aim-
ing to measure domain-dependent risk attitudes developed since
DOSPERT was introduced have not found widespread adoption.
The DOSPERT scale is quickly becoming the most preferred
risk attitude scale in psychology for its predictive abilities and
its ability to show whether observed risk behavior is based upon
the person’s perception of risk or the person’s attitude toward the
perceived risk, which allows for intervention and behavior mod-
ification.
2.2 An Engineering Definition of Risk Attitude
The definition and application of risk in engineering are more straightforward than in psychology. The ISO 31000:2009 doc-
ument [3] defines risk as the effect of uncertainty on objectives.
An effect is a positive or negative deviation from the expected.
Objectives are defined as having different aspects such as en-
vironmental, health and safety, and financial goals, and can be
applied at different levels of a project or organization. The ISO
31000:2009 definition of risk is further quantified as the probability of occurrence of an event multiplied by the severity of the consequences. It should be noted that uncertainty is often defined as a
lack of knowledge about system specifications and errors result-
ing from imperfect models [38]. Some researchers further break
down uncertainty into multiple subcategories that often contain
elements of risk, reliability, and robustness [39]. For the pur-
poses of this research, the ISO 31000:2009 definition of risk shall
be used in the context of engineering.
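Written as a simple expression (our shorthand for the quantification described above; the numbers are purely illustrative and do not come from the standard),
\[
\mathrm{risk} = P(\text{event}) \times S(\text{consequences}),
\]
so a failure event with an estimated annual probability of 0.01 and a consequence severity rated 8 on a 10-point scale would, for example, carry a risk score of $0.01 \times 8 = 0.08$ per year.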
If this is used as the operating definition of risk, then risk
attitude in engineering is the ‘state of mind’ of the engineer in
response to the perception of uncertainty on objectives [6]. The
engineer’s attitude will influence actions, or inactions, taken. The
behavior an engineer takes toward risk can be to retain, pursue,
take, or turn away from that risk. In other words, when presented
with a situation, it is important to determine how the engineer’s
risk attitude will influence behavior rather than simply whether
the engineer perceives a situation as being risky.
To assess this behavior, the ISO 31000:2009 document for
the standard of risk management was applied as the initial basis
for assessing behavior toward risk management, that is, the en-
gineer’s attitude to perceived risk and, simply, ‘what they would
do’. The ISO 31000:2009 document [3] specifically prescribes
four key factors in risk management: risk identification, risk
analysis, risk evaluation, and risk treatment. Risk Identification
is defined as the process of finding, recognizing, and describing
risks. Risk Analysis is the process of comprehending the nature
of a risk and determining the associated level of risk. Risk Evalu-
ation is the process of comparing the results of risk analysis with
the significance of the risk as compared to a reference risk scale.
Risk Treatment is the process of dealing with a risk. Each of
these aspects of risk management may also be considered theo-
retical risk domains because they cover the range of conditions
associated with increased probability of outcomes that compro-
mise the certainty of objectives. Each domain has a direct effect
on risk behavior and is a separate source of risk.
3 Hypotheses and Scale Development
The first research question addressed by this paper is
whether engineering risk is domain-dependent. If engineering
risk is domain-dependent, then risk-adjusted utility curves should
attend to the domain to which the curve is applied, and the risk
adjustment should differ when considering, for example, product
functionality or project completion. While the standard practice
of reliability engineering is to use the expected value theorem,
which dictates a risk-neutral approach, if domain differences in
risk attitudes exist [7, 8], it is hypothesized that engineers will
have risk attitudes that are not risk neutral and are, instead, spe-
cific to a domain of engineering risk.
Hypothesis 1. Engineers will exhibit risk attitudes that differ
by domain of risk.
In order to test this hypothesis, the E-DOSPERT test
was developed, as outlined in Section 4, based upon the ISO
31000:2009 standard for risk management. If this hypothesis
is true, there will be statistically significant differences in risk
attitude across domains.
The second research question concerns the domains of engi-
neering for which engineers exhibit differences in risk attitude.
For brevity, we refer to these as the domains of engineering risk.
Given the importance of risk attitude for the effective imple-
mentation of a risk management framework, we based the ini-
tial E-DOSPERT test on each of the prescribed four key factors
in risk management. We hypothesize four domains, which are
risk identification, risk analysis, risk evaluation, and risk treat-
ment, as defined in ISO 31000:2009. If these four are the correct
domains of engineering risk, then items, that is, test questions re-
garding domain-specific risky activities, will load onto the appro-
priate factor in an exploratory factor analysis. Otherwise, other
domains of engineering exist for which risk attitude will differ.
Factor analysis is a statistical technique that groups like items together on different dimensions; it aims to explain how a number of variables relate to one another through one or more shared latent common factors [40, 41]. Independence between the dimensions is achieved when the dimensions are orthogonal to one another in multi-dimensional space.
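In standard notation (added here for exposition; the paper itself does not state the model algebraically), the common factor model underlying this kind of analysis is
\[
\mathbf{x} = \boldsymbol{\mu} + \Lambda \mathbf{f} + \boldsymbol{\varepsilon},
\]
where $\mathbf{x}$ is the vector of observed item responses, $\boldsymbol{\mu}$ the vector of item means, $\mathbf{f}$ the vector of latent common factors (here, the hypothesized domains of engineering risk), $\Lambda$ the matrix of factor loadings, and $\boldsymbol{\varepsilon}$ the item-specific errors. The loadings reported later in Tables 3, 4, and 7 are estimates of the entries of $\Lambda$.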
Hypothesis 2. Engineering risk attitude differs across the four
content domains of engineering risk, which are risk identifica-
tion, risk analysis, risk evaluation, and risk treatment as defined
by ISO 31000:2009.
Hypothesis 3. Engineers have different risk attitudes across
the four content domains of engineering risk.
As the DOSPERT scale has been tested on individuals from
various national cultures, we decided to examine whether en-
gineering risk attitudes would likewise be consistent across na-
tional cultures. Engineering risk attitude data was collected from
mechanical, industrial, and manufacturing engineering students
at Oregon State University and students enrolled in a mechatron-
ics program at the University of Sydney (Australia). There is
no theoretical or empirical evidence to suggest that attitudes to-
ward engineering risk would differ across national boundaries,
and thus we predict that there would be no statistically signifi-
cant difference in engineering risk attitudes between Australian
and American engineers.
Hypothesis 4. There are no major differences in engineering
risk attitudes between Australian and American engineers.
Finally, we confirm that the E-DOSPERT test establishes a
unidimensional scale for risk attitude, ranging from risk-averse
to risk-seeking. If the scale is unidimensional, then individu-
als would answer inversely worded paired questions consistently.
That is, they would respond to a question worded as risk-seeking
exactly opposite to the way that they would if the question were
worded as risk-averse.
Hypothesis 5. Engineering risk attitudes can be measured on
a unidimensional scale (risk averse to risk seeking).
In order to test Hypothesis 5, the initial version of the
E-DOSPERT test contained paired inversely worded questions.
A total of 25 questions were intentionally inversely phrased.
In Section 4, we present Version A of the E-DOSPERT risk-
attitude scale, which consists of a number of items in the four
content domains of engineering risk. In addition to addressing
the first research question and the hypotheses related to the domain-
dependence of engineering risk and national differences in en-
gineering risk attitude, we document the reliability and unidi-
mensionality of the E-DOSPERT test. We next perform an ex-
ploratory factor analysis to verify the item loadings onto the four
domains. Study 2 documents Version B of the E-DOSPERT test
based on the results of the exploratory factor analysis of Version
A to address the second research question. The paper closes with
a discussion of the significance of the research findings, impli-
cations for practitioners, and current and future research on the
development of a scale for engineering risk attitude.
4 Initial E-DOSPERT Scale Development
This section documents the development of Version A of the
E-DOSPERT test, including respondent consistency tests using
replicated and paired questions and reliability based on values
of Cronbach’s alpha. Cronbach’s alpha is a measure of internal
consistency of a set of related questions [42]. The authors con-
ducted an exploratory factor analysis to determine whether the
four domains identified from the ISO 31000:2009 document underlie the risk behavior judgments (Hypothesis 2), to determine if engineers have engineering-specific risk attitudes that vary from the expected value theorem (Hypothesis 1) and between domains (Hypothesis 3), and to determine whether engineering students in Australia and America have similar engineering risk attitudes (Hypothesis 4). Further analysis was conducted to determine whether the engineering risk attitude sub-scales are unidimensional (Hypothesis 5).
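For reference, Cronbach's alpha for a $k$-item scale is computed as (the standard formula [42], restated here for convenience)
\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right),
\]
where $\sigma^2_{Y_i}$ is the variance of item $i$ and $\sigma^2_X$ is the variance of the total scale score; values above roughly 0.70 are conventionally taken to indicate acceptable internal consistency.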
4.1 Initial Scale Development Method
Version A of the E-DOSPERT test contains survey ques-
tions (items) based upon the ISO 31000:2009 definitions of the
four aspects of risk management and associated recommended
activities. Questions associated with domain-specific risky activ-
ities were developed for each of the domains: risk identification,
risk analysis, risk evaluation, and risk treatment. Respondents
were asked to evaluate their likelihood of engaging in a risky
or non-risky activity. Questions were worded such that a risk-
averse activity would be based upon known best-practice and/or
standards and a risk-seeking activity would violate generally ac-
cepted practices. Usefully, the ISO 31000:2009 standard pro-
vides descriptions of the types of activities that should be un-
dertaken in an effective framework for risk management. Rec-
ommended activities, which are considered the risk-averse ac-
tions, associated with risk management become the basis for cre-
ating items in the E-DOSPERT test. Thus, if an engineer believes
he/she is likely to engage in the non-risky activity, he/she would
be more risk-averse, and vice-versa. In addition, the authors’
knowledge of common professional mechanical and manufactur-
ing engineering-related situations involving risk was utilized to
develop items. The authors used an iterative peer-review pro-
cess and limited pilot surveying in the development of the bank
of questions, which is in line with standard survey development
practices [7, 8]. The authors independently generated banks of
candidate survey questions based upon the information found in
the ISO 31000:2009 standard, critiqued one another’s questions,
edited the questions as they deemed necessary, and iterated upon
the process until a bank of satisfactory questions was obtained.
Outside review was conducted by a small group of research as-
sociates to cross-check the questions for errors and to ensure that
the questions were worded as intended. A small pilot survey was
conducted with the cooperation of a pool of interested gradu-
ate students and professors. Limited follow-up interviews were
conducted to ensure that question meanings were correctly in-
terpreted. Revisions to questions were made as necessary. The
items in the test present respondents with typical scenarios or
tasks they would encounter in relation to each of the domains
of engineering risk. Their risk judgments toward these scenarios
or tasks should be influenced by their risk attitude. For exam-
ple, the engineer may (less risky) or may not (more risky) identify risks by having a process in place to record all failure data for a component in a system. In order to esti-
mate the likelihood of occurrence of an event, an engineer might
trust informed estimation (risky) instead of experimentation (less
risky). In evaluating the risk based on this estimation, the engi-
neer might place more weight on a regularly occurring minor
fault (less risky) than a severe one that may never occur (more
risky). To treat the risk, the engineer may operate the associ-
ated machinery far below the limits of safety (more risky) rather
than seek repairs (less risky). The likelihood of engaging in any
of these activities depends on the degree to which the engineer
avoids or seeks out risk. Each domain and associated questions
are briefly described below.
The risk identification portion of the ISO 31000:2009 stan-
dard recommends comprehensive identification of risks. The
identification of risks entails generating the set of events that may
detract from the achievement of desired objectives. The authors
considered ways in which risk events could be generated and how
new risks may be introduced but not identified. Sample questions
for risk identification include:
“not having complete data on the probability of failure for
each component in a system”
“introducing a design change (i.e., a new type of screw)
without full documentation because you think it’s a minor
change”
Risk analysis comprises the set of activities associated with
understanding the risk factors, the magnitude of consequences,
and the likelihood of consequences. The authors considered
different ways in which this information could be generated,
how divergent stakeholder opinions should be canvassed, and the
types of instruments and technologies associated with engineer-
ing analysis and how they can introduce risk into risk analysis.
Sample questions include:
“not trusting informed estimations of probabilities in a struc-
tured decision making process”
“accepting the results of computational simulation and analy-
sis without experimental corroboration of results”
Risk evaluation examines the data from risk analysis by
comparing the level of risk found during risk analysis to the ac-
ceptable level of risk. Acceptable levels of risk may come from
company policy or industry standards. The authors generated
sociotechnical methods for risk evaluation, considered ways in
which evaluations can be biased, and simple, hypothetical situa-
tions of risk evaluation. Sample questions include:
“placing more weight on a major fault that occurs on a regular
basis than one that may never occur”
“using a technology with a lower failure rate than another one
but at the expense of functionality”
Finally, risk treatment deals with actions taken to mitigate,
eliminate or modify the source of risk or its consequences. Sam-
ple questions include:
“staying quiet about your company’s cover up of a significant
design flaw”
“operating machinery well below capacity and far within the
limits of safety”
In Version A of the E-DOSPERT test, the original Likert
scale [43] used in the DOSPERT test [7] was employed to mea-
sure the likelihood of engaging in a risky (or non-risky) behavior.
The scale ranges from 1 to 5 with 1 corresponding to “very un-
likely”, 2 corresponding to “unlikely”, 3 corresponding to “not
sure”, 4 corresponding to “likely”, and 5 corresponding to “very
likely” to engage in an activity related to risk identification, anal-
ysis, evaluation and treatment. The questions were not grouped
by domain. We kept the mid-point as “not sure” to maintain con-
sistency with the original DOSPERT test. Some have argued that
the middle-point should be “neutral” and an “undecided” or “not
sure” option should also be available to respondents [44]. Offer-
ing both mid-point and not sure response options, termed Non-
Substantive Responses (NSRs) [45], has been found to change
the results of opinion surveys [46, 47]. In spite of the evidence
that NSRs should be used in surveys, the middle point on the
E-DOSPERT scale was chosen to be “not sure”. This avoided
confusion between the DOSPERT test and E-DOSPERT test in
the event that both tests are administered in succession to respon-
dents. Not using both NSRs allows for direct comparison be-
tween DOSPERT and E-DOSPERT results. Finally, the concept
of “neutral” as in a risk neutral risk attitude is about taking short-
term action to secure a certain long-term outcome [6], and this is
not the same as being risk neutral in the EU framework. Thus,
using the term “neutral” would not be appropriate. The term “not
sure” more closely matches the situation of risk tolerant, which
is considered the mid-point between risk seeking and risk averse
in the Hillson and Murry-Webster framework [6].
The Version A E-DOSPERT questions were phrased to mea-
sure risk averse and risk seeking attitudes along the Likert scale
described above. 25 questions were intentionally phrased in-
versely. For example, the authors asked respondents’ attitudes
towards technology use. The risk averse version asked respon-
dents to rate their likelihood of “using a technology with a lower failure rate than another one but at the expense of functionality.” The risk seeking version asked respondents about their likelihood of “using a technology that has a higher failure rate than a current one but that has better functionality.” Thus, the sub-set
of inversely worded questions provides a consistency check. If
the respondents are consistent and the scales are unidimensional
(risk averse or risk seeking), then the coefficient alpha will be
sufficiently high. Further, if the scales are unidimensional, Hy-
pothesis 5 will be validated. A complete list of questions is pre-
sented in Appendix A.
The questions in the E-DOSPERT survey were developed
with the aim of being applicable to engineers regardless of na-
tional origin - that is, the questions relate to matters of engineer-
ing which would occur anywhere. Like the DOSPERT scale,
the authors aimed to create an instrument with eight-item sub-
scales. However, for Version A of the E-DOSPERT survey, the
authors constructed a larger set of sub-items (test questions), 25
risk averse, 29 risk seeking, and 54 questions in all. The num-
ber of items can be reduced in later versions, using questions
with high inter-item correlations within a domain, once there is a
better understanding of engineering risk attitude, the domains of
engineering risk, and how to measure engineering risk attitude.
This larger set also allows the authors to perform an exploratory
factor analysis to determine if factors other than the four from
the ISO 31000:2009 standard constitute domains of engineering
risk.
4.2 Initial Scale Implementation and Testing
The Version A E-DOSPERT test was administered to un-
dergraduate and graduate students at the University of Sydney
(USyd) and Oregon State University (OSU). Prior to full testing,
the survey was administered to several small groups of graduate
students, undergraduate students, and researchers in order to val-
idate the wording of the items. The survey contained two parts
consisting of the DOSPERT test and the Version A E-DOSPERT
test. The survey was administered on-line using SurveyMonkey.
At USyd, the participant population was comprised of un-
dergraduate and graduate students in the Bachelor of Engineer-
ing (Mechatronics) program. A total of 23 students participated
in the survey. They ranged in age from 18 to 34, averaging 20
years of age. Three women and 20 men responded to the survey.
The participant population at OSU consisted of both graduate
and undergraduate students in the School of Mechanical, Indus-
trial, and Manufacturing Engineering. A total of 87 students re-
sponded. They ranged in age from 20 to 35 with an average of
23. Eight women and 79 men responded. The total sample popu-
lation was comprised of 110 respondents completing the survey.
The administration of the survey and its content was approved by
the relevant review boards at USyd and OSU.
4.3 Descriptive Statistics
Table 1 shows the sub-scale means (M) and standard devi-
ations (SD) for the 110 respondents for the risk averse and risk
seeking dimensions. For risk averse, the mean level of risk is M
= 3.16 (SD = 0.48) and for risk seeking, the mean level of risk
is M = 2.84 (SD = 0.52). The Shapiro-Wilk test, recommended for data sets with fewer than 2000 samples, was performed to test normality. At the alpha = 0.05 level, the p-value for the risk averse data set was 0.146 and the p-value for the risk seeking data set was 0.774, which indicates that both data sets can be accepted as normally distributed. Based
on a one-tailed ANOVA across the sub-scales, the means are sig-
nificantly different (p<0.001), meaning that the risk attitudes are
domain-specific. The sub-scale means and standard deviations,
and one-tailed ANOVA test clearly indicate that engineering risk
attitude does exist and further that engineering risk attitudes do
not follow the expected value theorem. These data strongly support Hypothesis 1.
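The normality and ANOVA checks described above can be reproduced with standard statistical tooling. The sketch below uses SciPy on simulated stand-in arrays; the variable names and data are illustrative only and are not the study data, and the paper does not state which analysis software was used.

```python
import numpy as np
from scipy import stats

# Simulated stand-ins for per-respondent sub-scale scores (illustrative only).
rng = np.random.default_rng(0)
risk_averse = rng.normal(3.16, 0.48, size=110)
risk_seeking = rng.normal(2.84, 0.52, size=110)

# Shapiro-Wilk normality test, suited to samples with fewer than ~2000 observations.
print(stats.shapiro(risk_averse))    # returns (statistic, p-value)
print(stats.shapiro(risk_seeking))

# One-way ANOVA across sub-scale score sets (two shown here); a p-value
# below 0.05 indicates the sub-scale means differ.
print(stats.f_oneway(risk_averse, risk_seeking))
```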
Table 1. Risk Averse and Seeking Means and Standard Deviations
Sub-scale Risk Averse Mean (SD) Risk Seeking Mean (SD)
Identification 3.42 (0.32) 2.61 (0.12)
Analysis 2.96 (0.39) 2.78 (0.63)
Evaluation 2.25 (0.38) 3.30 (0.51)
Treatment 3.47 (0.31) 2.80 (0.49)
Since the risk-attitude scale ranges from “very unlikely” to
“very likely”, the higher the mean for risk averse, the more risk
averse the respondents are, and, conversely, the lower the mean
for risk seeking, the less risk seeking the respondents are. The
data show that the respondents are, on the whole, quite unsure
about their risk attitude, that is, they are in the category of “risk
tolerant” according to Hillson and Murray-Webster’s scale [6].
They either believe that they can handle uncertainty when they
encounter it, or, given the undergraduate student status of respon-
dents, may not have yet developed the capacity to assess their
engineering risk attitude. The authors believe this is an indica-
tion that more attention should be paid to educating engineering
students on appropriate risk methods and practices.
Risk attitudes were compared between the OSU and USyd
students. No statistically significant difference was found (two-
tailed, independent samples t-test, Levene’s Test of Equality of
Variances satisfied for unbalanced population sizes), thus sup-
porting Hypothesis 4. Table 2 summarizes the mean and stan-
dard deviation of the OSU and USyd response groups for the
E-DOSPERT scale under risk seeking and risk aversion for all
domains and sub-scales. The results show that risk attitudes are
largely the same across the USyd and OSU respondents, except
on the risk averse-risk treatment sub-scale, which in turn af-
fected the statistical difference between the USyd and OSU on
the risk averse scale because of the higher proportion of items on
the risk treatment sub-scale. This imbalance in items is a flaw in
Version A of the scale, which was addressed in Version B of the
E-DOSPERT test.
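The between-university comparison described above could be run along the following lines; this is a sketch with simulated placeholder scores rather than the study data.

```python
import numpy as np
from scipy import stats

# Placeholder scale scores for the two respondent groups (not the study data).
rng = np.random.default_rng(1)
osu_scores = rng.normal(3.21, 1.05, size=87)
usyd_scores = rng.normal(3.34, 0.96, size=23)

# Levene's test checks the equal-variance assumption for the unbalanced groups.
_, levene_p = stats.levene(osu_scores, usyd_scores)

# Two-tailed independent-samples t-test; Welch's correction is applied if
# Levene's test indicates unequal variances.
t_stat, t_p = stats.ttest_ind(osu_scores, usyd_scores, equal_var=(levene_p > 0.05))
print(levene_p, t_stat, t_p)
```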
Table 2. Comparison of the USyd and OSU respondent populations
Subscale Uni Mean (SD)
Risk Seeking Identification Domain OSU 2.62 (0.984)
USyd 2.58 (0.930)
Risk Seeking Evaluation Domain OSU 3.30 (1.056)
USyd 3.29 (0.977)
Risk Seeking Analysis Domain OSU 2.77 (1.054)
USyd 2.85 (1.096)
Risk Seeking Treatment Domain OSU 2.81 (1.075)
USyd 2.79 (1.042)
Risk Seeking All Domains OSU 2.84 (1.069)
USyd 2.85 (1.048)
Risk Averse Identification Domain OSU 3.40 (1.043)
USyd 3.50 (0.925)
Risk Averse Analysis Domain OSU 3.12 (0.999)
USyd 3.25 (0.958)
Risk Averse Evaluation Domain OSU 3.40 (1.043)
USyd 3.50 (0.925)
Risk Averse Treatment Domain OSU 3.39** (1.036)
USyd 3.59** (0.848)
Risk Averse All Domains OSU 3.21** (1.051)
USyd 3.34** (0.962)
** p-value is <0.05
4.4 Initial Scale Results
Factor analysis is a statistical technique used to identify clus-
ters of variables. In this research, it was important to investigate
whether the items in the E-DOSPERT scale were measuring the
underlying variables proposed in the engineering risk domains
hypothesized. Several steps were taken in the exploratory factor
analysis of the data collected from Version A of E-DOSPERT
scale. First an exploratory factor analysis with oblique tar-
get rotation (oblimin) on the correlation matrix of the initial
E-DOSPERT scale items was performed. Items on both the risk
averse and risk seeking scales were removed where the anti-
image correlations were <0.50. The KMO measure of sampling
adequacy was sufficiently high (>0.70) and Bartlett’s test of
sphericity was significant, so that a factor analysis could proceed.
Based on the number of hypothesized sub-scales, a four-factor
model was specified. A four-factor model explained 49.683% of
the variance in the Risk Seeking Category and 48.536% of the
variance in the Risk Averse Category. Due to space limitations,
and to make interpretation of the model simpler, only those items
that load onto only one factor in the models’ factor structure are
shown in Table 3 for the Risk Averse dimension and Table 4 for
the Risk Seeking dimension [48].
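As a sketch of how this exploratory factor analysis could be carried out, the snippet below uses the open-source factor_analyzer package on a simulated response matrix. The paper does not state which software was used; the matrix dimensions and thresholds mirror the text, but the data are placeholders.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer, calculate_kmo, calculate_bartlett_sphericity

# Simulated respondents-by-items matrix of 1-5 Likert scores (placeholder data).
rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.integers(1, 6, size=(110, 29)))

# Sampling adequacy and sphericity checks that gate the factor analysis.
chi_sq, bartlett_p = calculate_bartlett_sphericity(responses)
_, kmo_total = calculate_kmo(responses)

if kmo_total > 0.70 and bartlett_p < 0.05:
    # Four-factor model with oblique (oblimin) rotation, as specified for Version A.
    fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
    fa.fit(responses)
    loadings = pd.DataFrame(fa.loadings_)
    # Keep only substantive loadings, |loading| > 0.40.
    print(loadings.where(loadings.abs() > 0.40))
```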
Table 3. Factor model structure for risk averse dimension (each item loads onto a single one of the four components; loadings shown)
Following standard operating procedures (replicated question): 0.902
Following standard operating procedures: 0.880
Following maintenance strategies according to manufacturer’s: 0.752
Having complete data on probability of failure: 0.625
Documenting all maintenance procedures: 0.540
Referring to authoritative source to check technical matter: 0.586
Miss deadline to complete experimental testing: 0.565
“Whistle-blowing” company’s cover up of significant flaw: 0.549
Operating machinery below limits: 0.464
Not upgrading software: 0.416
Investigating unlikely to occur design flaw: -0.735
No need for corroboration of experimental results: 0.643
Using new equipment after voluntary formal training: 0.808
Regular training on risk management: 0.764
Values in Tables 3 and 4 show that four factors were identified in the data. The loadings are arranged from higher to lower values within each factor. Substantive loadings are considered to be those greater than 0.40 in absolute value.
Table 4. Factor model structure for risk seeking dimension (each item loads onto a single one of the four components; loadings shown)
No formal review process: 0.774
Ensuring staff awareness of only major risks: 0.716
Conducting root cause analysis only for major failures: 0.639
Cut experimental testing to meet deadline: 0.523
Not calculating loss at the minimum probability of failure: 0.488
Emphasis on legal, regulatory, and other requirements: 0.332
Not recording the repairing of a fault: 0.750
Never conducting root cause analysis for failures: 0.736
Not updating training on risk management: 0.646
Quiet about company’s cover up of significant flaw: 0.513
Not documenting all maintenance procedures: 0.441
Technology with higher failure but better functionality: -0.632
No full documentation: -0.580
Not having complete data on probability of failure: -0.579
Allowing minor flaws: -0.561
Accepting colleague’s opinion on a technical matter: -0.520
Although the analysis of these tables suggests that the proposed scale could be composed of four sub-scales, the identified factors do not match the engineering risk domains initially proposed.
Each separate factor contains items from all four of the hy-
pothesized content domains, suggesting that these four content
domains as proposed by ISO 31000:2009 are not underlying fac-
tors in risk behavior judgment. Despite this discrepancy, there is
some uniformity in the interpretation of the factor model struc-
ture. In the Risk Averse dimension, Factor 1 includes items about
following established processes and procedures including main-
tenance and standard operating procedures, Factor 2 relates to
professional ethics and conduct such as ’whistle-blowing’ and
relying on professional bodies to set standards for technical stan-
dards, Factor 3 relates to product testing and Factor 4 relates to
training. In the Risk Seeking dimension, Factor 1 includes items
on processes and procedures such as having a formal review pro-
cess and following best practice in root cause analysis, Factor
2 contains one item related to legal matters, Factor 3 relates to
professional ethics and conduct such as covering up a significant
flaw and not documenting repairs to faults and Factor 4 includes
items relating to product functionality and design. Thus the data support Hypothesis 2 in that four factors are present, but reject the claim in Hypothesis 2 that the four hypothesized factors are the correct four domains of engineering risk.
Table 5 summarizes the values of Cronbach’s alpha for Ver-
sion A of the E-DOSPERT test. The reliability values are shown
for the Risk Averse and Risk Seeking Categories and are suffi-
ciently high (>0.70) given the test length [49]. Table 5 strongly
supports the hypothesis that risk seeking and risk averse behavior is present in engineering risk attitude on a unidimensional scale (Hypothesis 5), and further supports the hypothesis that engineering risk attitude does not follow the expected value theorem (Hypothesis 1).
Table 5. Reliability Statistics
E-DOSPERT Cronbach’s Alpha No of Items
Risk Averse 0.758 25
Risk Seeking 0.813 29
Table 6 summarizes the values of coefficient alpha and num-
ber of items for the initial E-DOSPERT scale under each of the
originally proposed content domains. The values are shown for
the Risk Averse and Risk Seeking dimensions on Version A of
the E-DOSPERT scale. Only the risk treatment and risk iden-
tification sub-scales have a sufficiently high reliability, although
the reliability for assessing risk treatment along the risk seeking
scale is below the generally accepted threshold (>0.70). This
presents further evidence that Hypothesis 2 should be rejected.
Respondents were consistent in answering replicated questions
with nearly 100% answering the questions in the same way.
Table 6. Reliability Statistics by Sub-scale
Sub-scale: Risk Averse Cronbach’s Alpha (N of Items); Risk Seeking Cronbach’s Alpha (N of Items)
Identification: 0.731 (4); 0.796 (6)
Analysis: 0.289 (8); 0.469 (9)
Evaluation: -0.384 (3); 0.257 (5)
Treatment: 0.726 (10); 0.614 (9)
Thus it can be concluded that the four factors originally pro-
posed by the ISO 31000:2009 document are not the underlying
domains of engineering risk.
4.5 Discussion
The results support the hypothesis that engineering risk at-
titude is domain-specific (Hypotheses 1, 2, and 3). The authors
were able to obtain suitable reliability for at least two of the sub-
scales, namely, risk identification and risk treatment, but not for
risk analysis and evaluation. In the factor analysis, items had
moderate to high loadings on their specified factors, and these
factors were not highly correlated, which supports the idea that
risk attitudes are multi-faceted and cannot be captured by a single
index. This is evidence against Hypothesis 2 although analysis
did show that four other potential domains of engineering risk
exist.
The reliability values for the risk analysis and risk evaluation
sub-scales were particularly low. This means that the respon-
dents were not able to discriminate between situations that dealt
with the analysis of a risk, which concerns understanding the
nature and the degree of the risk through actions such as gather-
ing empirical data, identifying sources of risk, running numerical
simulations, and estimating likelihoods of occurrence, and ques-
tions dealing with the evaluation of risk, which entails review-
ing data from the risk analysis. Given that the means and stan-
dard deviations for overall risk aversion and risk seeking were
very close to 3, meaning “not sure”, and that the population of
respondents were undergraduate students who were unfamiliar
with risk management, the authors speculate that the reliability
values may improve if a population of engineering professionals
familiar with engineering risk management were surveyed. That
the students were “not sure” of their risk attitude suggests that
this is an engineering awareness that should be developed.
Nonetheless, the reliability analysis allows the following
conclusion about Version A of the E-DOSPERT scale:
1. The scale is suitable to measure engineering risk aversion and
risk seeking.
2. The scale is suitable to measure engineering risk aversion and
risk seeking along the subscales of risk identification and risk
treatment.
3. The scale is not suitable to measure engineering risk aversion
and risk seeking along the subscales of risk analysis and risk
evaluation.
Given that the exploratory factor analysis did not show the
items loading onto their respective factors, an alternative inter-
pretation of the factors is made. An interpretation of the item-
loadings in the exploratory factor analysis suggests a different
set of factors, which could provide better coverage of risk-taking
situations encountered by engineers.
1. Engineering practice and processes: Situations associated
with project processes and the work of engineering.
2. Product functionality: Situations associated with the objec-
tives, requirements, performance, or failure of the engineered
product [50].
3. Legal: Situations associated with legal and regulatory re-
quirements in engineering and of engineers.
4. Engineering ethics: Situations associated with professional
and ethical conduct.
These factors correspond to domains of engineering risk
identified by other researchers. The factors associated with engi-
neering processes and product functionality have been identified
by Eckert [51] as generic risk factors based on their study of de-
sign processes across disciplines. The engineering ethics factor
has a correlation to the general risk domain of social risk [7] and
is suggestive of the generic engineering risk to the engineer’s
reputation [51].
5 Revising the E-DOSPERT Test
Based upon the results of the initial E-DOSPERT scale, the
authors revised, refined, and expanded the E-DOSPERT test to
identify the correct domains of engineering risk, the second re-
search question. We hypothesize six domains of engineering risk,
which includes the four domains identified from Version A of the
E-DOSPERT data and two additional potential domains identi-
fied from other research [50]. The six predicted domains of engi-
neering risk include: engineering practice and processes, product
functionality, legal, engineering ethics, product testing, and train-
ing. This section documents the development of Version B of the
E-DOSPERT test, its administration, analysis, and a discussion
of the results.
5.1 Revised E-DOSPERT Test Development
Questions were developed for each of the six domains based
upon professional engineering-related situations involving risk
that practicing engineers commonly encounter. The engineering
practices and processes portion of the scale is comprised of ques-
tions related to situations associated with project processes and
the work of engineering. The authors consulted project manage-
ment texts and professional engineering references when gener-
ating the questions. Sample questions include:
“not fully complying with company procedures in order to
meet a project deadline”
“having incomplete historical data on the performance of a
component”
The engineering ethics portion of the scale focuses on situ-
ations associated with professional and ethical engineering con-
duct. Classical engineering ethics case studies were reviewed
for inspiration in developing the questions. Sample questions in-
clude:
“taking credit for work done by a colleague”
“copying design work done for one client for another client”
The testing portion of the scale focuses on product testing.
The authors drew upon their backgrounds in product testing and
upon relevant texts to develop questions that examine the thor-
oughness and completeness of testing plans. Specific attention
was paid to several areas including verifying calculated data with
testing. Sample questions include:
“not corroborating computational simulations with experi-
mental results”
“take reported product malfunctions at face value”
The training portion of the scale was developed to examine
how engineers are trained and how engineers train others. At-
tention was paid to new equipment, and upgraded equipment,
and the need for additional training or in-depth training. Sample
questions include:
“not providing training for upgraded machines”
“not attending continuing education courses to learn new
skills”
The legal portion of the scale focused on situations associ-
ated with regulatory and legal requirements in the engineering
profession and of professional engineers. Sample questions in-
clude:
“you are flexible about complying with engineering regula-
tions”
“not maintaining full written records of all product testing
for compliance with relevant product regulations”
The product functionality and design portion of the scale
was based upon situations associated with the objectives, require-
ments, performance, or failure of engineered products [50]. Sam-
ple questions include:
“sub-contracting critical design work to a third party”
“using an unknown component to perform a critical function
because it is less expensive than a known suitable component”
In Version B of the E-DOSPERT test, a seven point Lik-
ert scale was used, as in the revised version of the DOSPERT
test [8]. The scale ranged from 1 corresponding to “very un-
likely” to 4 corresponding to “not sure” to 7 corresponding to
“extremely likely.” The full test is provided in Appendix B, with
the questions ordered randomly.
Version B questions of the E-DOSPERT were all phrased
in a manner similar to Version A. The authors intentionally did
not include the consistency check of inversely worded ques-
tions that were present in Version A of the E-DOSPERT be-
cause the analysis of Version A of the E-DOSPERT scale al-
ready supported Hypothesis 5. Based upon the results of Version
A of the E-DOSPERT scale and similar research [8], inversely worded questions are not needed to show that the scale is unidimensional (risk averse to risk seeking). The questions were
developed with the goal of being national origin independent. In
other words, the questions relate to engineering matters that are
expected to occur anywhere. A total of 65 questions were tested.
The number of items can be reduced in future versions of the
E-DOSPERT test. This large set of questions and resulting data
allows exploratory factor analysis to be performed to determine
if the six proposed factors are present or if other factors underlie
risk behavior judgments.
5.2 Revised Scale Implementation and Testing
Version B of the E-DOSPERT test was administered to un-
dergraduates and graduate students at OSU. The survey was ad-
ministered using SurveyMonkey. Prior to full testing, the sur-
vey was administered to several small groups of students and re-
searchers in order to validate and refine the questions.
The participant population comprised graduate and undergraduate students enrolled in courses in, or associated with,
the School of Mechanical, Industrial, and Manufacturing Engi-
neering. In total, 206 students responded. The age range was
from 19 to 43, averaging 22 years old. A total of 22 women
and 184 men responded to the survey. The administration of the
survey and its contents were approved by the OSU Institutional
Review Board.
5.3 Revised Scale Results
Factor analysis was performed on the resulting data in a manner similar to Section 4.4. An exploratory factor analysis with oblique target rotation (oblimin) and Maximum Likelihood Extraction (MLE) [52] was performed. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was sufficiently high (0.795) and Bartlett’s test of sphericity was significant (p < 0.05), allowing factor analysis to proceed. Based upon the number of hypothesized sub-scales, a six-factor model was specified. The six-factor model explained 43.696% of the variance in the scale. Several iterations of purging items that loaded poorly or onto multiple factors and verifying item communalities were performed. However, the analysis ran into ultra-Heywood cases. This led the authors to reexamine the supposition of a six-factor model. The scree plot indicated that a five-factor model might also be present. A five-factor model explained 40.617% of the variance in the scale. Several iterations of removing poorly loading items and verifying communalities were performed. The resulting scale has a KMO of 0.806 and Bartlett’s test of sphericity was significant. The goodness-of-fit test was not significant, indicating that the model is a good match to the data. Table 8 provides additional statistics for the full scale. Table 7 presents the five factors that were identified and the associated factor loadings. Table 9 shows the reliability of each factor that was identified.
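For readers who wish to reproduce this style of analysis on their own response data, a minimal sketch of the pipeline described above is shown below. The sketch assumes the responses are stored in a CSV file (one row per participant, one column per item) and uses the open-source factor_analyzer Python package for illustration; this is not the software used by the authors, and the file name is hypothetical.

# Minimal sketch of the exploratory factor analysis described above.
# Assumes a CSV of 1-7 Likert ratings: rows = participants, columns = items.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("edospert_responses.csv")  # hypothetical file name

# Sampling adequacy (KMO) and Bartlett's test of sphericity before factoring.
_, kmo_total = calculate_kmo(responses)
chi_square, p_value = calculate_bartlett_sphericity(responses)
print(f"KMO = {kmo_total:.3f}, Bartlett p = {p_value:.4f}")

# Maximum likelihood extraction with oblique (oblimin) rotation and five factors.
fa = FactorAnalyzer(n_factors=5, rotation="oblimin", method="ml")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(3))                 # item-by-factor loading matrix
print(fa.get_factor_variance()[2][-1])   # cumulative proportion of variance explained

Items that load weakly or onto multiple factors would then be removed and the analysis repeated, as described above.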
Table 7. Factor model structure for revised E-DOSPERT (factor loading shown after each item)

Component 1:
Not documenting every single step that was taken to design a new component (PnP): .740
Not fully complying with company procedures in order to meet a project deadline (PnP): .667
Having incomplete historical data on the performance of a component (PnP): .626
Not having complete data on the probability of failure for each component in a system (PFnD): .560
Copying design work done for one client for another client (E): .445

Component 2:
Exaggerating your company’s competencies in order to win a contract (E): .813
Accepting a weekend holiday (vacation) from potential contractors (E): .612
Use consumable work resources for home projects (E): .543
Reverse engineer a competitor’s technology with the intent to bring to market a nearly identical copy (E): .470
Protect your client’s confidentiality by not reporting to a regulatory agency a negligent behavior by the client (E): .433
Not giving much consideration about whether the product can be recycled or disposed of in a safe, secure and environmentally sound manner (E): .428

Component 3:
Not attending compulsory formal training for new machines (T): -.778
Not providing training for upgraded machines (T): -.691
Not following standard operating procedures systematically (PnP): -.497

Component 4:
Not investigating a suspected design flaw because you don’t think it is likely to happen (PT): .620
Using an unknown component to perform a critical function because it is less expensive than a known suitable component (PFnD): .520
Relying upon the risk management practices you learned at university rather than regular continuing education on new risk management techniques (T): .513
Going into detailed design with the first design concept you came up with (PFnD): .441

Component 5:
Verifying that your product is in compliance with all applicable environmental, health, and safety laws and regulations (L): .590
Glance at the operating procedures for a new product prior to use (T): .566
Placing higher emphasis on legal, regulatory, and other requirements over operating profitability (L): .436

Note: (E) = Ethics, (PT) = Product Testing, (PFnD) = Product Functionality and Design, (L) = Legal, (PnP) = Processes and Procedures, (T) = Training. The domain codes represent the six factors of engineering risk attitude originally proposed.
Table 8. Scale Statistics
Mean: 73.02; Variance: 188.074; Std. Dev.: 13.714; N of Items: 21; Cronbach’s Alpha: 0.800
Table 9. Factor Reliability
Factor Cronbach’s Alpha N of Items
Factor 1 0.759 4
Factor 2 0.750 6
Factor 3 0.699 3
Factor 4 0.638 4
Factor 5 0.521 3
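For reference, the Cronbach’s alpha values reported in Tables 8 and 9 are computed from the item and total-score variances of each (sub-)scale [42]. For a k-item scale,

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total scale score.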
5.4 Discussion
The results of Version B of the E-DOSPERT analysis show strong evidence of a five-factor scale. The authors were able to obtain suitable reliability for Factors 1 and 2, where Cronbach’s alpha was acceptable (>0.70) [41], and marginal reliability for Factors 3 and 4 (>0.60) [40]. The reliability of Factor 5 is low, but there is evidence that a fifth factor exists. Based upon an interpretation of the items loading onto the five factors, the authors propose the following set of factors, or domains of engineering risk:
1. Processes, Procedures, and Practices: All five of the questions relate to the best processes, procedures, and practices that engineers should follow in their professional lives.
2. Engineering Ethics: All six questions are based upon ethical
dilemmas encountered by practicing engineers.
3. Training: The three questions that loaded onto Factor 3 re-
late to conducting training and following guidance given by
training.
4. Product Functionality and Design: The four questions that
loaded onto Factor 4 relate to the functionality and design of
products.
5. Legal Issues: Two of the three questions that load onto Factor
5 relate to legal issues.
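As an illustration only, scoring a respondent on these five domains could be as simple as averaging the ratings of the items that load onto each factor. The item names below are shortened, hypothetical stand-ins for the Table 7 wordings, and the simple averaging rule is an assumption for illustration, not a scoring key published with the scale; items worded in the risk-averse direction (such as the compliance-verification items) would first need to be reverse-coded.

# Illustrative per-domain scoring; `responses` is a pandas DataFrame of 1-7
# ratings whose columns use shortened, hypothetical item names. Items worded
# in the risk-averse direction would be reverse-coded (8 - rating) beforehand.
import pandas as pd

domain_items = {
    "Processes, Procedures, and Practices": ["no_documentation", "skip_procedures_for_deadline",
                                             "incomplete_history", "no_failure_data", "copy_design_work"],
    "Engineering Ethics": ["exaggerate_competencies", "accept_holiday", "consumables_at_home",
                           "reverse_engineer_copy", "shield_negligent_client", "ignore_recyclability"],
    "Training": ["skip_compulsory_training", "no_upgrade_training", "ignore_sops"],
    "Product Functionality and Design": ["ignore_suspected_flaw", "unknown_critical_component",
                                         "stale_risk_practices", "first_concept_to_detail"],
    "Legal Issues": ["verify_compliance", "glance_at_procedures", "legal_over_profit"],
}

def domain_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Return the mean rating per respondent for each of the five domains."""
    return pd.DataFrame({domain: responses[items].mean(axis=1)
                         for domain, items in domain_items.items()})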
Additional analysis was conducted to verify that higher numbers of factors were not present. No additional interpretable factors appear in the data. While the pool of participants was smaller than desired (N = 206) and the reliability of several factors was lower than ideal, the evidence points toward a five-factor model of the domains of engineering risk. These five domains are consistent with the factors predicted by the authors and by other researchers. Further replication of the test should be performed with other sample populations to confirm and further strengthen these findings. A combined sample population of N > 400 is desirable [7].
6 E-DOSPERT Applications
The E-DOSPERT test in its current form and in future revi-
sions is useful to the practitioner and researcher for several rea-
sons. First, administering an E-DOSPERT test to an engineer
can provide valuable insight into how that engineer will behave
in engineering risk situations. Targeted training can then be provided to correct for any difference between the engineer’s risk attitude and the attitude the position requires.
Second, the E-DOSPERT could be used as part of a hir-
ing process. Already many companies administer personality
type tests such as the Myers-Briggs Type Indicator (MBTI) [53]
and others. With a proper understanding of the results of an
E-DOSPERT test, hiring managers can be expected to make
more informed choices on hiring engineers.
Third, stakeholder risk preference can be collected using the
E-DOSPERT. Rather than requiring stakeholders to be present to
provide input on their engineering risk attitude, design engineers
can refer to the stakeholders’ E-DOSPERT scores. This can be
expected to save time and produce results more in line with what
the stakeholders intrinsically desire.
To enable use in practice, a method can feasibly be de-
veloped based upon the E-DOSPERT survey to translate expert
opinions from individual scales to a normative scale. In other
words, judging the risk of a product failure on a scale of 1 to 10
might elicit a response of 7 from one expert and a response of
5 from another. Those two different numbers might simply be
the result of different internal scales. Normalizing those expert
opinions using the E-DOSPERT might result in the discovery
that both experts mean the same thing.
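As a purely illustrative sketch of what such a normalization might look like (this is not a method proposed here), each expert’s raw rating could be expressed relative to a baseline rating tendency that is, hypothetically, estimated from that expert’s E-DOSPERT responses:

# Purely illustrative normalization of expert risk ratings. The per-expert
# baseline mean and spread are assumed to come from an E-DOSPERT-derived
# calibration; this is not a method defined by the scale itself.
def normalize_rating(raw_rating: float, expert_mean: float, expert_std: float) -> float:
    """Express a raw 1-10 risk rating as a z-score against the expert's own baseline."""
    return (raw_rating - expert_mean) / expert_std

# Two experts rating the same failure: 7 and 5 on a 1-10 scale.
expert_a = normalize_rating(7.0, expert_mean=6.5, expert_std=1.0)  # 0.5
expert_b = normalize_rating(5.0, expert_mean=4.5, expert_std=1.0)  # 0.5
# After normalization, both experts express the same relative level of concern.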
Another area that is already being actively developed is us-
ing E-DOSPERT test results to generate utility risk functions.
These utility risk functions can then be used to analyze early
conceptual system design trade studies that contain risk as a
tradeable parameter. Decision aids and decision automation
can also take place using utility risk functions generated from
E-DOSPERT results as shown in prior work [54].
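As a sketch of the general idea only (the specific mapping used in prior work [54] may differ), a domain score could parameterize an exponential utility function that is then used to weight risky outcomes in a trade study; the mapping from score to risk tolerance below is a placeholder assumption.

# Sketch of a risk-attitude-parameterized utility function for trade studies.
# The mapping from an E-DOSPERT domain score to the risk-tolerance parameter R
# is a placeholder; the actual mapping in prior work may differ.
import math

def risk_tolerance_from_score(domain_score: float, midpoint: float = 4.0) -> float:
    """Placeholder: scores above the 1-7 midpoint give a larger (more risk-tolerant) R."""
    return 10.0 * math.exp(domain_score - midpoint)

def exponential_utility(outcome: float, risk_tolerance: float) -> float:
    """Concave exponential utility; a smaller R means stronger risk aversion."""
    return 1.0 - math.exp(-outcome / risk_tolerance)

# Compare a certain payoff of 4 against a 50/50 lottery over 0 and 10
# for a relatively risk-averse decision maker (domain score 3.0).
R = risk_tolerance_from_score(3.0)
certain = exponential_utility(4.0, R)
lottery = 0.5 * exponential_utility(0.0, R) + 0.5 * exponential_utility(10.0, R)
# With a low R the certain payoff is preferred even though its expected value is lower.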
7 Conclusion
This paper presented the development of a psychometric en-
gineering risk-attitude test, the E-DOSPERT scale, to measure
the risk aversion and risk seeking attitudes of engineers. Ver-
sion A of the E-DOSPERT scale was first presented to test the
validity of the ISO 31000:2009 standard and its recommended
four content domains for risk management as the basis for risk
behavior judgment. Two of the domains, analysis and evalua-
tion, were found to be not easily discriminated, at least in a pop-
ulation of engineering undergraduates. Based on an exploratory
factor analysis with oblique target rotation, the authors suggested
four other factors that may underlie the risk behavior judgments.
Based upon further insight into the data, an additional two po-
tential domains were added. Version B of the E-DOSPERT scale
was then developed and tested.
Items in the revised E-DOSPERT scale are based on commonly encountered engineering risk scenarios and on scenarios drawn from risk management. The results show that the scale is suitably reliable for measuring engineering risk attitude in two domains: processes, procedures, and practices (five items) and engineering ethics (six items). The scale is marginally suitable for measuring engineering risk attitude in two additional domains: training (three items) and product functionality and design (four items). A fifth domain, legal issues (two items), appears to be present but is not statistically reliable. The Version B E-DOSPERT scale is suitably reliable to measure general engineering risk aversion and risk tolerance.
In practice, the Version B E-DOSPERT scale can be used to assess engineering risk attitudes toward processes, procedures, and practices and toward engineering ethics; at the option of the practitioner, two additional domains, training and product functionality and design, may also be assessed. Product functionality and design items could be reworded to relate to the industry context of the surveyed individuals, which may improve reliability. The authors suggest that users of the scale remove or refine the items in the legal issues domain in Version B of the scale. In future work, the authors will revise the wording of items in the training, product functionality and design, and legal sub-scales. The goal is six strongly loading items per sub-scale, to ready the E-DOSPERT scale for unrestricted use by practitioners.
Additional testing of the survey will be performed over larger
sample populations to gain further statistical validity. Tests at
multiple universities and in multiple countries will be performed.
The role that educational level and professional engineering experience play will be examined in forthcoming research. A survey of engineers in different industries will
be conducted in order to understand variation between industries
and sub-disciplines. After further vetting, the E-DOSPERT will
be made available in multiple languages. Once these further steps
are taken, such an instrument can then be used as a standard to as-
sess domain-specific engineering risk attitude across industries,
within organizations, by gender and national origin, and as pre- and post-tests on the development of risk assessment as an engineering attribute in engineering education. The authors believe
that such information is crucial in interpreting how individual en-
gineers approach design and design decision-making in different
domains of engineering risk.
Acknowledgements
This research was supported in part under Australian Re-
search Council’s Discovery Projects funding scheme (project
number DP1095601) and in part by the National Science Foun-
dation (project number CMMI 1030060). The study protocol was
reviewed and approved by the Institutional Review Board at OSU
and the Human Research Ethics Committee at USyd. The opin-
ions, findings, conclusions and recommendations expressed are
those of the authors and do not necessarily reflect the views of
the sponsors.
References
[1] Van Bossuyt, D. L., Wall, S. D., and Tumer, I., 2010. “To-
wards risk as a tradeable parameters in complex systems
design trades”. In ASME 2010 International Design En-
gineering Technical Conferences and Computers and In-
formation in Engineering Conference (IDETC/CIE2010),
Vol. 3: 30th Computers and Information in Engineering
Conference, Parts A and B, ASME, pp. 1271–1286.
[2] Tumer, I. Y., and Stone, R. B., 2003. “Mapping function to
failure mode during component development”. Research in
Engineering Design, 14(1), pp. 25–33.
[3] Standards Australia, and Standards New Zealand, 2009.
AS/NZS ISO 31000:2009 Risk Management - Principles
and Guidelines.
[4] Oehmen, J., Ben-Daya, M., Seering, W., and Al-Salamah,
M., 2010. “Risk management in product design: Current
state, conceptual model and future research”. In ASME
2010 International Design Engineering Technical Confer-
ences and Computers and Information in Engineering Con-
ference (IDETC/CIE2010), Vol. 1: 36th Design Automa-
tion Conference, Parts A and B, ASME, pp. 1033–1041.
[5] Mehr, A. F., and Tumer, I. Y., 2006. “Risk-based decision-
making for managing resources during the design of com-
plex space exploration systems”. Journal of Mechanical
Design, 128, pp. 1014–1022.
[6] Hillson, D., and Murray-Webster, R., 2007. Understanding
and managing risk attitude. Aldershot, Gower.
[7] Weber, E. U., Blais, A.-R., and Betz, N. E., 2002. “A
domain-specific risk-attitude scale: Measuring risk percep-
tions and risk behaviors”. Journal of Behavioral Decision
Making, 15(4), pp. 263–290.
[8] Blais, A.-R., and Weber, E. U., 2006. “A domain-specific risk-taking (DOSPERT) scale for adult populations”. Judgment and Decision Making, 1(1), pp. 33–47.
[9] Van Bossuyt, D. L., Carvalho, L., Dong, A., and Tumer,
I. Y., 2011. “On measuring engineering risk attitudes”.
In ASME 2011 International Design Engineering Techni-
cal Conferences and Computers and Information in Engi-
neering Conference (IDETC/CIE2011), Vol. 9: 23rd Inter-
national Conference on Design Theory and Methodology;
16th Design for Manufacturing and the Life Cycle Confer-
ence, ASME, pp. 425–434.
[10] Pratt, J. W., 1964. “Risk aversion in the small and in the
large”. Econometrica, 32(1-2), pp. 122–136.
[11] Arrow, K., 1971. Essays in the Theory of Risk Bearing.
Markham, Chicago.
[12] Keeney, R. L., and Raiffa, H., 1993. Decisions with Mul-
tiple Objectives: Preferences and Value Tradeoffs. Cam-
bridge University Press, Cambridge.
[13] Bernoulli, D., 1954. “Exposition of a new theory on the
measurement of risk”. Econometrica, 22(1), pp. 23–36.
[14] Kahneman, D., and Tversky, A., 1979. “Prospect theory:
An analysis of decision under risk”. Econometrica, 47(2),
Mar., pp. 263–291.
[15] von Winterfeldt, D., and Edwards, W., 1986. Decision
Analysis and Behavioral Research. Cambridge University
Press, Cambridge.
[16] Slovic, P., 1964. “Assessment of risk taking behavior”. Psy-
chological Bulletin, 61(3), pp. 330–333.
[17] Schoemaker, P. J. H., 1990. “Are risk-preferences related across payoff domains and response modes?”. Management Science, 36(12), pp. 1451–1463.
[18] MacCrimmon, K., and Wehrung, D. A., 1990. “Charac-
teristics of risk taking executives”. Management Science,
36(4), pp. 422–435.
[19] Dyer, J. S., and Sarin, R. K., 1982. “Relative risk aversion”.
Management Science, 28(8), pp. 875–886.
[20] Weber, E. U., 1997. The Utility of Measuring and Modeling
Perceived Risk. Erlbaum, Mahwah, NJ, pp. 45–57.
[21] Bromiley, P., and Curley, S., 1992. Individual Differences
in Risk Taking. John Wiley & Sons, Oxford, pp. 87–132.
[22] Sarin, R. K., and Weber, M., 1993. “Risk-value models”. European Journal of Operational Research, 70, pp. 135–149.
[23] Weber, E. U., 1998. Who’s Afraid of a Little Risk? New Ev-
idence for General Risk Aversion. Kluwer Academic Press,
Norwell, MA, pp. 53–64.
[24] Yates, J. F., and Stone, E. R., 1992. The Risk Construct.
John Wiley & Sons, Oxford, pp. 1–25.
[25] Coombs, C. H., 1975. Portfolio Theory and the Measure-
ment of Risk. Academic Press, New York, pp. 63–68.
[26] Cooper, A. C., Woo, C. Y., and Dunkelberg, W. C., 1988.
“Entrepreneurs’ perceived chances for success”. Journal of
Business Venturing, 3, pp. 97–108.
[27] Bontempo, R. N., Bottom, W. P., and Weber, E. U., 1997.
“Cross-cultural difference in risk perception: A model-
based approach”. Risk Analysis, 17, pp. 479–488.
[28] Slovic, P., 1997. Trust, Emotion, Sex, Politics, and Sci-
ence: Surveying the Risk-Assessment Battlefield. Jossey-
Bass, San Francisco, pp. 277–313.
[29] Schwartz, A., and Hasnain, M., 2001. “Risk perception and risk attitude in informed consent”. Working Paper, Department of Medical Education, University of Illinois Chicago.
[30] Slovic, P., Fischhoff, B., and Lichtenstein, S., 1986. The
Psychometric Study of Risk Perception. Plenum Press, New
York, pp. 3–24.
[31] March, J. G., and Shapira, Z., 1987. “Managerial per-
spectives on risk and risk taking”. Management Science,
33(11), pp. 1404–1418.
[32] Weber, E. U., 2001. Personality and Risk Taking, and Deci-
sion and Choice: Risk, Empirical Studies. Elsevier Science
Limited, Oxford, pp. 11274–11276.
[33] Budner, S., 1962. “Intolerance of ambiguity as a personality variable”. Journal of Personality, 30, pp. 29–50.
[34] Zuckerman, M., 1994. Behavioral Expressions and Biosocial Bases of Sensation Seeking. Cambridge University Press, New York.
[35] Paulhus, D. L., 1984. “2-component models of socially
desirable responding”. Journal of Personality and Social
Psychology, 46, pp. 598–609.
[36] Johnson, J. G., Wilke, A., and Weber, E. U., 2004. “Beyond
a trait view of risk-taking: A domain-specific scale measur-
ing risk perceptions, expected benefits, and perceived-risk
attitude in german-speaking populations”. Polish Psycho-
logical Bulletin, 35(3), pp. 153–163.
[37] Blais, A.-R., and Weber, E. U., 2006. “Testing invariance in
risk taking: A comparison between Anglophone and Fran-
cophone groups”. Série Scientifique, 2006s-25.
[38] Martin, J. D., and Simpson, T. W., 2006. “A methodology to manage system-level uncertainty during conceptual design”. ASME Journal of Mechanical Design, 128, pp. 959–968.
[39] Thunnissen, D. P., 2003. “Uncertainty classification for the
design and development of complex systems”. In 3rd An-
nual Predictive Methods Conference.
[40] DeVellis, R. F., 2003. Scale Development Theory and Ap-
plications, 2nd ed. SAGE Publications, Newbury Park.
[41] Nunnally, J. C., and Bernstein, I. H., 1994. Psychometric
Theory, 3rd ed. McGraw-Hill, New York.
[42] Cronbach, L. J., 1951. “Coefficient alpha and the internal structure of tests”. Psychometrika, 16(3), pp. 297–334.
[43] Likert, R., 1932. “A technique for the measurement of atti-
tudes”. Archives of Psychology, 140, pp. 1–55.
[44] Raaijmakers, Q. A. W., van Hoof, A., ’t Hart, H., Verbogt,
T. F. M. A., and Vollebergh, W. A. M., 2000. “Adolescents’
midpoint responses on likert-type scale items: Neutral or
missing values?”. International Journal of Public Opinion
Research, 12(2), pp. 208–216.
[45] Francis, J. D., and Busch, L., 1975. “What we now know
about “I don’t knows””. Public Opinion Quarterly, 39,
pp. 207–218.
[46] Presser, S., and Schuman, H., 1980. “The measurement
of a middle position in attitude surveys”. Public Opinion
Quarterly, 44(1), pp. 70–85.
[47] Ayidiya, S. A., and McClendon, M. J., 1990. “Response
effects in mail surveys”. Public Opinion Quarterly, 54(2),
pp. 229–247.
[48] Field, A., 2009. Discovering Statistics Using SPSS, 3rd ed.
Sage, London.
[49] Schmitt, N., 1996. “Uses and abuses of coefficient alpha”.
Psychological Assessment, 8(4), pp. 350–353.
[50] Lough, K., Van Wie, M., Stone, R., and Tumer, I., 2009.
“Promoting risk communication in early design through
linguistic analyses”. Research in Engineering Design,
20(1), pp. 29–40.
[51] Eckert, C., Earl, C., Stacey, M., Bucciarelli, L. L., and
Clarkson, P. J., 2005. “Risk across design domains”.
In 15th International Conference on Engineering Design
(ICED05), The Design Society.
[52] Costello, A. B., and Osborne, J. W., 2005. “Best practices
in exploratory factor analysis: Four recommendations for
getting the most from your analysis”. Practical Assessment,
Research and Evaluation, 10(7), pp. 173–178.
[53] Myers, I. B., McCaulley, M. H., Quenk, N. L., and Ham-
mer, A. L., 1998. MBTI Manual. Consulting Psychologists
Press, Palo Alto.
[54] Van Bossuyt, D., Hoyle, C., Tumer, I. Y., and Dong, A.,
2012. “Considering risk attitude using utility theory in
risk-based design”. Artificial Intelligence for Engineering
Design, Analysis and Manufacturing (AIEDAM), 26(4),
pp. 393–406.
Appendix A: Version A of the E-DOSPERT Scale
Version A of the E-DOSPERT test presented in Appendix A was administered online using SurveyMonkey. The questions were automatically randomized when presented to the respondents. Below, the questions are presented in alphabetical order.
For each of the following statements please indicate the likelihood of engaging in each activity. Please provide a rating using
the following scale:
1 = Very Unlikely; 2 = Unlikely; 3 = Not Sure; 4 = Likely; 5 = Very Likely
1. “Whistle-blowing” your company’s cover up of a significant design flaw. (T)
2. Accepting the results of computational simulation and analysis without experimental corroboration of results. (A)
3. Accepting your colleagues’ opinion about a technical matter without checking the originating source. (A)
4. Adjusting standard operating procedures to handle a design flaw to better fix the flaw. (T)
5. Allowing minor flaws through on a production line to keep the line moving. (T)
6. Applying a new process recommended in a prestigious journal even if it is not an industry-wide standard. (A)
7. Calculating potential loss from a design fault at the minimum probability of failure. (A)
8. Conducting a root cause analysis every time that a failure occurs. (A)
9. Conducting a root cause analysis of major failures but not of minor failures. (A)
10. Conducting maintenance according to what you think is best rather than following manufacturer recommended maintenance
strategies. (T)
11. Continuing to use an outdated but robust piece of software even if others in your group choose to upgrade to a new version. (A)
12. Cut back on experimental testing to meet a project deadline. (A)
13. Ensuring that all staff know about potential risks no matter how minor. (I)
14. Following maintenance strategies exactly according to manufacturer specifications. (T)
15. Following standard operating procedures word-for-word for the handling of any design flaw. (T)
16. Formally documenting all maintenance procedures. (T)
17. Fully documenting every design change, no matter how minor. (I)
18. Further investigating a design you suspect has a flaw that you estimate is not likely to occur. (I)
19. Halting a production line immediately if any flaw, no matter how minor, is identified. (T)
20. Having complete data on the probability of failure for each component in a system. (I)
21. Having formal review processes to review and analyse the history of design faults. (A)
22. Having no formal review process to analyse and review the history of design faults. (A)
23. Ignoring a colleague’s suggestion to investigate a major but unlikely design flaw. (A)
24. Informing staff only about potential major risks but not about minor risks. (I)
25. Introducing a design change (i.e., a new type of screw) without full documentation because you think it’s a minor change. (I)
26. Making a design change if a component’s failure rate is close to but below the industry standard for component failure. (T)
27. Miss a project deadline to conduct complete experimental testing. (A)
28. Never conducting root cause analysis for failures. (A)
29. Not bothering to calculate potential loss from a design fault at the minimum probability of failure. (A)
30. Not documenting all maintenance procedures. (T)
31. Not having complete data on the probability of failure for each component in a system. (I)
32. Not making a design change if its failure rate is close to but below the industry standard for component failure. (T)
33. Not trusting informed estimations of probabilities in a structured decision making process. (A)
34. Operating machinery at the limits of safety and availability. (T)
35. Operating machinery well below capacity and far within the limits of safety. (T)
36. Placing more emphasis on legal, regulatory, and other requirements over operating profitability. (E)
37. Placing more weight on a major fault that may never occur than a major fault that occurs often. (E)
38. Placing more weight on a major fault that occurs on a regular basis than one that may never occur. (E)
39. Recording a major fault but not a minor fault. (I)
40. Referring to an authoritative source to check your colleagues’ opinion about a technical matter. (A)
41. Relying on experience over formal processes when vetting decisions. (E)
42. Repairing a fault but not recording the number of times you have needed to fix the fault. (I)
43. Staying quiet about your company’s cover up of a significant design flaw. (T)
44. Trusting experimental results even when they do not align with analytical calculations. (E)
45. Trusting informed estimation of probabilities in a structured decision making process. (A)
46. Upgrading your design analysis software as soon as a new version is available even if it is not used by others in your group. (A)
47. Using a new piece of equipment without optional formal training. (T)
48. Using a technology that has a higher failure rate than a current one but that has better functionality. (E)
49. Using a technology with a lower failure rate than another one but at the expense of functionality. (E)
50. Using an industry-wide standard rather than a new process recommended in a prestigious journal. (A)
51. Using risk management practices that were industry best practices when you learned them but not keeping up-to-date with current
practices. (A, E, T, I)
52. Voluntarily attending formal training before using a new piece of equipment. (T)
53. Voluntarily taking formal training on a regular basis on industry best practices in risk management. (I)
54. Using risk management practices that were industry best practices when you learned them but not keeping up-to-date with current
practices. (I)
Note: (A) = Risk Analysis, (T) = Risk Treatment, (E) = Risk Evaluation, (I) = Risk Identification
Appendix B: Version B of the E-DOSPERT Scale
Version B of the E-DOSPERT test presented in Appendix B was administered online using SurveyMonkey. The questions were automatically randomized when presented to the respondents. Below, the questions are presented in alphabetical order.
For each of the following statements please indicate the likelihood of engaging in each activity. Please provide a rating using
the following scale:
1 = Very Unlikely; 2 = Moderately Unlikely; 3 = Somewhat Unlikely; 4 = Not Sure; 5 = Somewhat Likely; 6 = Moderately Likely; 7 = Very Likely
1. Accepting a weekend holiday (vacation) from potential contractors (E)
2. Adding many extra features to a product beyond original specifications (PFnD)
3. Assuming unfavorable test results from an early production prototype will improve after the next prototype is constructed (PT)
4. Certify a document as a qualified, professional engineer that is outside of your area of expertise (E)
5. Competent professional engineers need not be registered with a professional body that regulates appropriate professional practice
(L)
6. Comply with your supervisor’s instruction to withhold information from a client (E)
7. Consult the professional engineering code of conduct regularly (L)
8. Contracting product testing to a specialist outside firm (PT)
9. Copying design work done for one client for another client (E)
10. Designing a product in a manner that emphasizes profitability over protecting the environment, and the health, safety and security
of end-users (E)
11. Developing only general but not detailed operation guidelines for a piece of equipment (PnP)
12. Disregarding the company Standard Operating Procedures on design processes when starting a new design (PnP)
13. Exaggerating your company’s competencies in order to win a contract (E)
14. Glance at the operating procedures for a new product prior to use (T)
15. Going into detailed design with the first design concept you came up with (PFnD)
16. Having incomplete historical data on the performance of a component (PnP)
17. Including a component in a product for which there is only one supplier (PFnD)
18. Investigating product failures only when you think it is important (PT)
19. Leave it up to your customers to decide if they want to receive training on the safe operation of your product (T)
20. Let your workgroup discover new industry standards on their own (T)
21. Making decisions based on personal experience and intuition rather than evidence (PnP)
22. Not actively seeking information about the patent law in countries where you are operating (L)
23. Not assessing failure risk for incremental changes to a product (PFnD)
24. Not attend continuing education courses to learn new skills (T)
25. Not attending compulsory formal training for new machines (T)
26. Not consult legal counsel on how to proceed if accused of improper conduct related to an engineering matter (L)
27. Not corroborating computational simulations with experimental results (PT)
28. Not documenting every single step that was taken to design a new component (PnP)
29. Not following standard operating procedures systematically (PnP)
30. Not following the exact manufacturer-recommended maintenance strategies (PnP)
31. Not formally benchmarking your product against competing products (PFnD)
32. Not fully complying with company procedures in order to meet a project deadline (PnP)
33. Not fully understanding the limitations of “canned” calculations prior to using them (PT)
34. Not giving much consideration about whether the product can be recycled or disposed of in a safe, secure and environmentally
sound manner (E)
35. Not having an independent person or department audit quality assurance programs (PnP)
36. Not having complete data on the probability of failure for each component in a system (PFnD)
37. Not investigating a suspected design flaw because you don’t think it is likely to happen (PT)
38. Not maintaining full written records of all product testing for compliance with relevant product regulations (L)
39. Not providing training for upgraded machines (T)
40. Not testing a product for functionality beyond its intended purposes (e.g., using a hammer handle as a lever) (PT)
41. Offer no follow-up, refresher training on how to operate equipment (T)
42. Placing higher emphasis on legal, regulatory, and other requirements over operating profitability (L)
43. Protect your client’s confidentiality by not reporting to a regulatory agency a negligent behavior by the client (E)
44. Rely only upon the manual of a new product that your company is deploying to learn safe operating procedures (T)
45. Relying on unwritten knowledge rather than documenting minor changes to procedures (PnP)
46. Relying upon computer simulation models to predict product failure modes without confirming by empirical testing (PT)
47. Relying upon the risk management practices you learned at university rather than regular continuing education on new risk
management techniques (T)
48. Reverse engineer a competitor’s technology with the intent to bring to market a nearly identical copy (E)
49. Seeking legal counsel about tort (liability) laws that might have an impact on your product (L)
50. Selling a product claiming high reliability based upon calculations but without extended field testing to back up the computational
models (PT)
51. Staying quiet about your company’s cover up of a significant design flaw (E)
52. Sub-contracting critical design work to a third-party (PFnD)
53. Take reported product malfunctions at face value (PT)
54. Taking credit for the work done by a colleague (E)
55. Use consumable work resources for home projects (E)
56. Using a new technology with better functionality but that has a higher failure rate than a current technology (PFnD)
57. Using an unknown component to perform a critical function because it is less expensive than a known suitable component (PFnD)
58. Verifying that your product is in compliance with all applicable environmental, health, and safety laws and regulations (L)
59. When serving as an expert witness, letting your previous experience with one of the litigating companies influence your testimony
on the resolution of a dispute of a technical matter (E)
60. Withhold information from the general public about risks associated with a specific technology that is relevant to the public’s
health and welfare (E)
61. You are flexible about complying with engineering regulations (L)
Note: (E) = Ethics, (PT) = Product Testing, (PFnD) = Product Functionality and Design, (L) = Legal, (PnP) = Processes and
Procedures, (T) = Training.
A variety of response effects that had been found previously in interview surveys were tested in a mail survey of a heterogeneous local population. These included experiments on question order response order, no-opinion filters, middle-response alternatives, and acquiescence. The results generally supported earlier findings based on student samples which had shown that order efects were eliminated in self-administered surveys but that question-form effects occurred as in interview surveys. One question-order effect, however, was found in the mail survey, and a type of response-order effect (a primacy effect) that had not been previously tested also occured. Interactions between education and response effects that had sometimes been found in interview surveys were not present in the mail survey.