The Job Selection Problem for Career Starters: a
Decision-Theoretical Application
Part 2: Identifying the Best Alternative using the ENTSCHEIDUNGSNAVI
Prof. Dr. Rüdiger von Nitzsch and FH-Prof. PD Dr. habil. Johannes Siebert
Prof. Dr. Rüdiger von Nitzsch is Head of the Research Area Decision Research and Financial Services at the RWTH
Aachen University. FH-Prof. PD Dr. habil. Johannes Siebert holds the Professorship for Supply Chain Management at the
Management Center Innsbruck and is Private Lecturer at the University of Bayreuth.
This article describes a practical application of decision theory in two parts. In the
first part, the problem was fundamentally structured in the form of a result matrix. In
this second part, using the web tool ENTSCHEIDUNGSNAVI, it is shown how results,
probabilities and preferences can be determined as free of bias as possible, based on
Multi-Attribute Utility Theory, such that the best of the set of action alternatives is identified.
Short text: After graduation, graduates face the question of which job to start their
further career with. It is shown how the web tool ENTSCHEIDUNGSNAVI supports
decision-making. Based on an already pre-structured decision situation (result matrix), the
tool elicits the necessary preferences and further parameters as objectively and realistically
as possible in order to select the best alternative for action.
Keywords: decision, utility functions, weights, debiasing, incomplete information
1. From problem structuring to the final decision
In the first part of this article, the decision problem "The right job for Peter" was structured
and presented in a result matrix. This second part will show what steps are necessary to allow Peter to
identify the best alternative for him from the set of all identified alternatives. The Multi
Attribute Utility Theory (MAUT; Keeney and Raiffa, 1976; see Eisenführ et al., 2010, pp. 318,
in German) will serve as the framework for eliciting and aggregating Peter's preferences. The decision is
illustrated with the decision support tool ENTSCHEIDUNGSNAVI, which was developed by the
authors (von Nitzsch, 2017, pp. 324) and is freely accessible to anyone interested at
www.entscheidungsnavi.de.
After an in-depth problem structuring, as was done in the first part of this case study, all
objectives are clearly formulated, all potentially possible alternatives are identified, and their
effects are quantified in the measurement scales defined for the objectives. As there are also
uncertainties in the impact predictions, the relevant uncertainty factors as well as possible states
with the associated effects in the respective alternatives were defined, too.
In order to make a decision on this basis, Peter has three tasks. He must
1. specify probabilities,
2. make relative utility assessments for the results of the alternatives in all
objectives, and
3. quantify (i.e. weight) the differing relevance of the stated objectives.
These tasks are subjective assessments that can rarely be made exactly. A well-grounded
approach to identifying the best alternative should therefore explicitly take this lack of
exactness into account. Moreover, psychological research has revealed a number of factors that
distort estimates and therefore require so-called "debiasing". Both problems are explicitly
considered in the design of the ENTSCHEIDUNGSNAVI.
2. Specification of probabilities and debiasing
In Table 4 of the first part, three factors for which the results are uncertain (in short:
uncertainty factors) were identified, and Peter has to specify discrete probability
distributions over their respective states, namely "extent of the actual position upgrading",
"start-up success" and "working atmosphere" (the latter referring to the large business consultancy).
With respect to the uncertainty factor "position upgrading", Peter has to specify concrete
probabilities for the three states "no upgrade", "upgrade to a ¾-position after one year" and
"upgrade to a full position after one year". Since there is hardly any usable data from which
probabilities could be derived objectively, Peter has to give a very subjective estimate, which
is itself subject to high uncertainty. It is a difficult task for him to provide crisp numbers.
The ENTSCHEIDUNGSNAVI therefore allows a certain inaccuracy so that it is easier for him to
specify any probabilities at all. This is achieved by attaching relative uncertainties to the
probabilities. For example, Peter sets the probability for "no upgrade" at approximately 20% +/- 10%
(i.e. between 10% and 30%), for "upgrade to a ¾-position after one year" at 30% (between 20% and
40%) and for "upgrade to a full position after one year" at 50% (between 40% and 60%).
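To illustrate how such interval-valued probability estimates might be represented, the following sketch stores each state as an expected probability plus a symmetric margin and checks that the expected values form a proper distribution. This is a hypothetical data layout for illustration, not the ENTSCHEIDUNGSNAVI's internal format.

```python
# Hypothetical representation of Peter's initial interval-valued estimates
# for the uncertainty factor "position upgrading": (expected value, +/- margin).
position_upgrading = {
    "no upgrade":                   (0.20, 0.10),  # 10% .. 30%
    "3/4-position after one year":  (0.30, 0.10),  # 20% .. 40%
    "full position after one year": (0.50, 0.10),  # 40% .. 60%
}

def check_distribution(factor: dict) -> None:
    """The expected values of a discrete distribution must sum to 1."""
    total = sum(p for p, _ in factor.values())
    assert abs(total - 1.0) < 1e-9, f"expected values sum to {total}, not 1"

check_distribution(position_upgrading)
for state, (p, margin) in position_upgrading.items():
    print(f"{state}: {p - margin:.0%} .. {p + margin:.0%}")
```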
The articulation of probabilities can lead to bias. In order to counteract such distortions,
Peter is confronted in the ENTSCHEIDUNGSNAVI with the typical mistakes that decision-makers
make in such estimation tasks (see in detail Montibeller and von Winterfeldt, 2015, pp. 1230).
He is informed that it can be dangerous to rely only on his intuition, especially if his experience
with regard to the uncertainty factor to be estimated is low. Due to simplified thought patterns,
this can lead to different distortions (Kahneman, 2011, pp. 185). An example of this is the
overestimation of the probability of a seemingly plausible scenario consisting of several
individual events in comparison to the assessment of the respective individual probabilities
(conjunction fallacy). At the same time, he is made aware that people sometimes overreact to a
certain event if they have thought about it intensively (availability bias). In addition, he is
informed about the narrative bias: people rashly conclude from individual, well-known stories
that these are generally true. Peter realizes that this bias could influence him, because he (rashly)
has believed in a high chance of upgrading the job since this happened recently to a friend of
his. However, in the case of his friend, there were special circumstances and Peter realizes from
those explanations that one cannot derive general statements from the individual case of his
lucky friend. Therefore, he corrects the probability of an upgrade to a full position from 50%
down to 30% and increases the probability of no upgrade at all from 20% to 40%. Table
1 summarizes the information provided by Peter at the end of the probability estimation for all
uncertainty factors.
Uncertainty factor | Possible environmental conditions | Expected value of probabilities | Uncertainty
Upgrading of the job | without upgrade | 40 % | each +/- 10 %
 | ¾-job after one year | 30 % |
 | full job after one year | 30 % |
Success of the start-up | unsuccessful | 66 % | each +/- 5 %
 | moderate success | 22 % |
 | great success | 12 % |
Working atmosphere | bad | 60 % | each +/- 5 %
 | medium | 25 % |
 | good | 15 % |

Table 1: Peter's listing of expected values and possible ranges
Distortions, however, occur not only in the probability estimates of more or less likely
results, but are basically common to all estimates, i.e. also to result estimates that are
regarded as certain. In this respect, the ENTSCHEIDUNGSNAVI draws attention to well-known
psychological pitfalls in estimating values, and appropriate debiasing recommendations are made in order to
reduce possible distortions. There is a great danger, for example, when decision-makers have
already "committed" themselves to certain projects, i.e. have invested effort and money or are
responsible for the project, and in view of this are strongly (and usually unconsciously) biased
towards continuing the project and therefore forecast overly optimistic results. Peter's decision,
however, is not affected by such sunk costs. Yet he feels affected by another factor: in the
past, he had thought intensively about how to develop his further career path after obtaining a
PhD degree at the university. Thinking in such "success scenarios" often leads to overlooking
various reasons for possible failure and to overly optimistic estimates (overconfidence). The
ENTSCHEIDUNGSNAVI therefore prompts him to think ten years into the future and to imagine that
a career path via a doctorate had failed (prospective hindsight method, see von Nitzsch, 2017,
p. 328). With such a procedure, it is usually easier for people to take these pitfalls into
account and to give more realistic assessments. In fact, Peter reduces his initial rating of
"excellent" for the opportunities for professional development to "very good".
3. Utility assessments for each objective
For deriving a decision or a ranking of the alternatives from a completely defined result
matrix, the preferences of the decision maker have to be elicited and modeled. Within the
MAUT, the preference model of a decision maker consists of three components:
value preferences, i.e. an assessment of the different levels of results in each objective
criterion,
risk preferences for each objective criterion, and
weights for the objectives, i.e. an assessment of the varying importance of all stated
objectives.
Value and risk preferences are linked in the MAUT and mapped together in the concept of
the Bernoulli utility function u. Altogether, m corresponding utility functions u_i for
i ∈ {1, ..., m} are to be determined for the m objectives. With these utility functions, each
entry x in the result matrix can be converted into a utility value u_i(x), which is normalized
to the interval between 0 (for the worst result) and 1 (for the best result).
In the case of objectives that are measured numerically on an ordinal or higher scale level,
the decision maker usually determines utility functions that enable a transformation into utility
values between 0 and 1 for all possible consequences in the defined range. Very simple and
"smooth" functional forms are sufficient here if fundamentality has been strongly emphasized in
the formulation of the objectives. In the ENTSCHEIDUNGSNAVI, therefore,
exponential utility functions are assumed, in which different preference profiles of decision
makers can easily be differentiated by means of a risk aversion parameter c. If x^- is the
worst and x^+ the best value of the interval of possible results in an objective, the following
form of the utility function u is assumed:

$$ u(x) = \frac{1 - e^{-c \,(x - x^-)/(x^+ - x^-)}}{1 - e^{-c}} $$
Figure 1 illustrates how Peter determines the utility function of the "income (for the next
three years)" objective using the ENTSCHEIDUNGSNAVI.
Figure 1: Determining a utility function using the ENTSCHEIDUNGSNAVI
Peter's task is basically to find exactly the function that best reflects his preferences by
trying out different curvatures of the utility function. The graphical representation of the utility
function in the diagram on the left helps him for instance to visually grasp the extent of
diminishing marginal utility. At the same time, there are additional verbal explanations of the
utility function on the right side. One can choose between four variants; in Figure 1, variant II
was chosen as an example, i.e. an interpretation in which a potential outcome is compared to a
50%-50% lottery (halving method). Variant III is similar, except that here the probabilities are
varied rather than the certainty equivalent as in variant II (variable probability method). In
variant I, marginal utility increases are presented in a risk-free context,
and variant IV concretely displays the values of the risk aversion parameter c assumed in the
functions. Using the "Level" and "Width" sliders, one can specify different intervals for values
to which the respective verbal interpretations refer. Overall, therefore, there are enough
possibilities for Peter to check whether the specified function actually reflects his own
preferences well.
With the buttons "Accurate" and "Inaccurate", Peter can also specify how precisely he can or
would like to narrow down his preferences. Ideally, a utility function could be specified exactly,
and the diagram would then show only a single function. The higher he sets the inaccuracy, the
further apart the two limiting utility functions lie. As shown in Figure 1, Peter chooses an
accuracy that limits the certainty equivalent of the lottery specified in the statement text (50%:
€100k, 50%: €225k) to between €135k and €141k.
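As a plausibility check, the stated interval can be reproduced with the exponential utility function given above, under the assumption (ours, for illustration) that the income results are normalized on roughly €0k to €250k, the full span occurring in the result matrix: for c between 3 and 4, the certainty equivalent of the 50/50 lottery over €100k and €225k indeed lands between about €135k and €141k. A minimal sketch:

```python
import math

X_MIN, X_MAX = 0.0, 250.0  # assumed normalization range for income (T€)

def u(x: float, c: float) -> float:
    """Normalized exponential utility: u(X_MIN) = 0, u(X_MAX) = 1."""
    z = (x - X_MIN) / (X_MAX - X_MIN)
    return (1.0 - math.exp(-c * z)) / (1.0 - math.exp(-c))

def certainty_equivalent(c: float) -> float:
    """Sure amount with the same utility as the 50/50 lottery over 100 and 225 T€."""
    eu = 0.5 * u(100.0, c) + 0.5 * u(225.0, c)
    z = -math.log(1.0 - eu * (1.0 - math.exp(-c))) / c   # invert u
    return X_MIN + z * (X_MAX - X_MIN)

print(round(certainty_equivalent(4.0), 1))  # ~135.4 T€
print(round(certainty_equivalent(3.0), 1))  # ~141.0 T€
```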
For objectives measured on a verbal scale, the domain of a utility function covers only the
possible, appropriately defined consequences. In the ENTSCHEIDUNGSNAVI, therefore, numerical
functions are not determined for objectives with verbal scales; instead, the user is asked
directly for corresponding point scores for each possible level, on a scale between 0 and 100.
The worst level is given 0 points, the best 100. In this direct rating method, the transformation
into (normalized) utility values is done by dividing the point scores by 100; a small sketch after
Table 2 illustrates this. Again, Peter may include a degree of inaccuracy by specifying a
precision. Table 2 shows Peter's finally specified utility assessments and precision levels for
the six objectives.
Nr. | Objective | Risk aversion parameter c (for numerical scales) or point values for the possible levels (for verbal scales)
1 | Income in the next three years | c lies between 3 and 4
2 | Joy at work | Point values: 0 (none), 32 (little), 59 (medium), 90 (much), 100 (very much); bandwidth 5
3 | Opportunities for professional development | Point values: 0 (very bad), 25 (bad), 50 (medium), 75 (good), 90 (very good), 100 (excellent); bandwidth 0
4 | Theoretically available leisure activities | c lies between 1.5 and 2.1
5 | Total amount of usable time for leisure activities | c lies between 1.8 and 2.4
6 | Attractiveness of the housing situation | Point values: 0 (extremely bad), 30 (low), 70 (medium), 100 (high); bandwidth 2

Table 2: Results of all utility assessments by Peter
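For a verbal scale, the direct rating reduces to a lookup and a division by 100, with the bandwidth turning each point value into a small utility interval. The sketch below uses Peter's ratings for "joy at work" from Table 2; it is an illustration only, not the tool's implementation.

```python
# Peter's point values for the verbal scale of "joy at work" (see Table 2).
JOY_POINTS = {"none": 0, "little": 32, "medium": 59, "much": 90, "very much": 100}
BANDWIDTH = 5  # +/- points of admitted imprecision

def joy_utility_interval(level: str) -> tuple[float, float]:
    """Return the (lower, upper) utility admitted by the stated bandwidth."""
    points = JOY_POINTS[level]
    low = max(0, points - BANDWIDTH) / 100
    high = min(100, points + BANDWIDTH) / 100
    return low, high

print(joy_utility_interval("medium"))  # (0.54, 0.64)
```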
4. Weighting of objectives
In addition to the objective-specific benefit assessments, the different importance of the
objectives is expressed in the preference model of the MAUT in the form of so-called weights
of the objectives. Formally, the modeling is as follows: let x_ij be the result of alternative x
in objective criterion i and state j, and let p_ij be the probability of the associated state;
then the total utility of alternative x is given by

$$ u(x) = \sum_{i=1}^{m} w_i \sum_{j} p_{ij} \, u_i(x_{ij}), $$

where w_i denotes the weight of objective i. The weights are normalized by setting their sum
equal to 1. Given the existence of tradeoffs, it is clear that changing the objective weights can easily
lead to changes in the assessment of the relative benefits of alternatives. In this respect, the
weights of the objectives are critical parameters, and great attention should always be paid to
their careful determination.
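The additive aggregation itself is a small computation; the sketch below evaluates the formula above for a hypothetical alternative with two objectives, assuming the objective-specific utilities u_i(x_ij) have already been normalized to [0, 1].

```python
from typing import Dict, List, Tuple

def total_utility(weights: Dict[str, float],
                  outcomes: Dict[str, List[Tuple[float, float]]]) -> float:
    """u(x) = sum_i w_i * sum_j p_ij * u_i(x_ij), with the w_i summing to 1.

    outcomes maps each objective to (probability, utility) pairs over its states.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * sum(p * ut for p, ut in outcomes[obj])
               for obj, w in weights.items())

# Hypothetical example: one uncertain and one certain objective.
weights = {"income": 0.6, "joy": 0.4}
outcomes = {
    "income": [(0.4, 0.3), (0.6, 0.8)],  # two states, probabilities 0.4 / 0.6
    "joy":    [(1.0, 0.9)],              # certain result
}
print(total_utility(weights, outcomes))  # 0.6*(0.12 + 0.48) + 0.4*0.9 = 0.72
```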
In some practical applications of such additive evaluation models, however, the weights of the
objectives are handled astonishingly casually. Not infrequently, a general question is asked
about the "importance" of the objectives, without taking into account that the influence of the
w_i crucially depends on the bandwidth [x^-; x^+] over which the objective-specific ratings are
normalized. The smaller the bandwidth in an objective, the lower, ceteris paribus, the weight of
that objective must be. Ignoring this results in a decision-making tool that is not well founded.
In order to avoid this problem, the ENTSCHEIDUNGSNAVI therefore derives the weights of the
objectives from elicited tradeoffs between the objectives. The
decision maker has to indicate how much better a result in a considered objective must be such
that a deterioration in another objective is exactly compensated. The parameters wi can then be
determined from such tradeoffs.
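With all utility functions normalized to [0, 1], such an indifference statement translates directly into a ratio of weights: if a utility gain of Δu_i in objective i exactly compensates a utility loss of Δu_ref in the reference objective, then w_i · Δu_i = w_ref · Δu_ref. The sketch below solves this for w_i and then normalizes; the numbers are hypothetical and not Peter's actual elicitation.

```python
def weight_from_tradeoff(w_ref: float, delta_u_ref: float, delta_u_i: float) -> float:
    """Solve the indifference condition  w_i * delta_u_i == w_ref * delta_u_ref  for w_i."""
    return w_ref * delta_u_ref / delta_u_i

W_REF = 100.0  # the reference objective ("income") is pinned to 100

# Hypothetical tradeoff: a utility gain of 0.55 in "career development"
# exactly compensates a utility loss of 0.30 in "income".
raw = {"income": W_REF,
       "career development": weight_from_tradeoff(W_REF, 0.30, 0.55)}

total = sum(raw.values())
normalized = {name: w / total for name, w in raw.items()}
print(raw)         # non-normalized weights
print(normalized)  # weights rescaled to sum to 1
```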
Peter must specify a reference objective in the tool for this purpose, for which he determines
tradeoffs with all other objectives. A suitable reference objective is always a rather important,
numerically measured objective, as this facilitates the process of specifying tradeoffs. Peter
decides on the income objective.
Figure 2: Determination of tradeoffs between income and opportunities for professional development
Figure 2 shows, using the tradeoff "income" vs. "career development opportunities" as an
example, in what form tradeoffs are elicited in the ENTSCHEIDUNGSNAVI. For this purpose, Peter
varies the relative objective weight for the development objective until he can accept the
displayed indifference curve or, respectively, the verbal explanations as a reflection of his
preferences. Similar to the determination of the utility functions, a graphical representation is
offered, here in the form of indifference curves, along with three verbal ones, which are derived
from the indifference curve. In addition, if required, Peter can use the additional sliders to
change the bandwidths considered in the tradeoffs. In variant II, the tradeoff is explicitly
explained by presenting combinations of outcomes that Peter either judges to be equivalent, in
the case of exact values, or uses to narrow down the bandwidth, in the case of imprecise values.
In the somewhat simpler variant I, instead of comparing complete combinations, only the
respective differences in the two objectives are compared. Variant III displays the values of the
not yet normalized weights of the two objectives under consideration that are assumed in the
calculation. In Peter's example, this results in a weight for the objective "career development
opportunities" of at least 50 and at most 60, while the weight of the reference objective
"income" is set at 100.
In the case of m objectives, m - 1 such tradeoffs have to be determined in this way; in the
example, these are a total of five tradeoffs for the six defined objectives. Table 3 summarizes
the parameters finally determined by Peter.
Tradeoff between objectives | Non-normalized weight | Normalized weight (sum = 1) | Precision (related to non-normalized weight)
Maximize income | 100 | 0.395 | –
Maximize pleasure on the job | 32 | 0.126 | 4
Maximize career development opportunities | 55 | 0.217 | 5
Maximize leisure opportunities | 21 | 0.083 | 0
Maximize usable time for leisure activities | 30 | 0.119 | 7
Maximize attractiveness of the housing situation | 15 | 0.059 | 6

Table 3: Parameters specified by Peter in the objective weighting
5. Identifying the best alternative
After the determination of all parameters of the additive MAUT preference model, the best
alternative is, according to the theory, found quickly: the resulting utility values are
calculated for all alternatives, and the decision maker should then choose the alternative with
the highest utility value. Ignoring the imprecision Peter has specified in several steps, a
corresponding result can be deduced quickly in this example as well. For this purpose, the
ENTSCHEIDUNGSNAVI assumes that whenever imprecision is attached to a parameter entered by the
user, the respective expected value is included in the calculation of the expected utility
values. This applies to the probabilities, the point ratings in the direct rating procedure for
verbal levels, the risk aversion parameter c for numerically measured utility functions, and the
(non-normalized) weights of the objectives. In such a calculation, the alternative "research
assistant (with a possible position upgrade)" with an expected utility of 0.8211 wins just ahead
of "trainee position in a company in the Eifel" with 0.8152, as shown in Table 4.
Alternative | Utility | Income of the next three years (T€) | Enjoyment at work (verbal) | Opportunities for further professional development (verbal) | Leisure opportunities (grade) | Usable time for leisure activities (%) | Attractiveness of the housing situation (verbal)
Research assistant (possible position upgrading) | 0.8211 | 75 to 125 T€ (position upgrading) | very much | very good | B | 30 % to 60 % (position upgrading) | medium
Trainee position in a company in the Eifel | 0.8152 | 140 T€ | much | good | B | 40 % | high
Research assistant (½ position) | 0.8013 | 75 T€ | very much | very good | B | 60 % | medium
Small consulting firm near Aachen | 0.7284 | 140 T€ | much | medium | D | 30 % | medium
Big consulting firm down south | 0.7125 | 200 T€ | none to much (working atmosphere) | excellent | D | 10 % | extremely bad
Department office in a company in the Eifel | 0.6184 | 120 T€ | little | very bad | A | 70 % | high
Start-up | 0.3963 | 0 to 250 T€ (success of the start-up) | very much | bad to very good (success of the start-up) | E | 0 % | medium

Table 4: Utilities and ranking of the alternatives
However, this is only a first, quick result which, as mentioned above, does not take into
account the imprecision Peter explicitly indicated in his preferences. It could well be that
parameter values other than the imputed expected values, which would be entirely possible within
the specified ranges, lead to a different ranking of the alternatives, for example with the
trainee position beating the job as a research assistant with a possible position upgrade.
Therefore, Peter is not quite convinced yet.
In order to find out more about the effects of the specified imprecision, Peter carries out a
so-called robustness test with a Monte Carlo simulation in the ENTSCHEIDUNGSNAVI, in which
random draws are made from the values permitted within the specified degrees of precision. With
these draws, the utility values of all alternatives are calculated unambiguously, and the
resulting ranking of the alternatives and the individual utility values are derived in each case.
After a large number of such random draws, in which both the utility values and the rankings are
saved each time, Peter considers the evaluation shown in Figure 3.
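A minimal sketch of such a robustness check is shown below: every imprecise parameter is drawn uniformly from its admitted interval, the alternatives are re-scored, and the first-place frequencies are counted. The inputs are hypothetical and heavily simplified (expected utilities with margins instead of the full parameter set), and re-normalizing the drawn weights is our simplifying assumption, not necessarily the tool's exact procedure.

```python
import random
from collections import Counter

# Hypothetical per-objective expected utilities with +/- margins for two alternatives.
ALTERNATIVES = {
    "research assistant (upgrade)": {"income": (0.70, 0.05), "joy": (0.95, 0.03)},
    "trainee in the Eifel":         {"income": (0.78, 0.02), "joy": (0.85, 0.05)},
}
WEIGHTS = {"income": (0.60, 0.04), "joy": (0.40, 0.04)}

def draw(mean_and_margin):
    mean, margin = mean_and_margin
    return random.uniform(mean - margin, mean + margin)

def simulate(n_draws: int = 100_000) -> Counter:
    first_place = Counter()
    for _ in range(n_draws):
        w = {name: draw(interval) for name, interval in WEIGHTS.items()}
        s = sum(w.values())
        w = {name: value / s for name, value in w.items()}  # re-normalize drawn weights
        scores = {alt: sum(w[obj] * draw(u) for obj, u in objectives.items())
                  for alt, objectives in ALTERNATIVES.items()}
        first_place[max(scores, key=scores.get)] += 1
    return first_place

counts = simulate()
total = sum(counts.values())
for alt, n in counts.most_common():
    print(f"{alt}: ranked first in {100 * n / total:.1f}% of draws")
```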
Figure 3: Results of the Monte Carlo simulation for the robustness check
On the right side, the intervals of the utility values of the alternatives that resulted in the
simulation are shown. This gives Peter a first impression of the fluctuation range of the ratings
depending on the possible exact choice of parameters. From this analysis alone, Peter could
derive statements about possible rankings between alternatives. For example, it can be seen that
the fluctuation range of the alternative "research assistant; ½ position" in third place lies
entirely above the fluctuation range of the alternative "department office in a company in the
Eifel" in fourth place. This means that there is no parameter constellation in which "department
office in a company in the Eifel" would obtain a higher utility value than "research assistant;
½ position". The ranking order between these two alternatives is therefore undisputed. However,
that is not what interests Peter most right now.
More important for him is the comparison of the two best alternatives, "research assistant with
a possible position upgrade" and "trainee position in a company in the Eifel", for which a
different picture emerges: here the fluctuation ranges overlap, so that no clear ranking results.
It is precisely for such cases that the determined frequencies, i.e. how often an alternative
reached a particular ranking position, are of crucial importance. The analysis shows that in 83%
of all 10,000,000 cases drawn in the Monte Carlo simulation, the research assistant position had
the highest utility value and was thus better than the trainee position in these cases. Only in
17% of the simulations was the order of the evaluation reversed. This comparatively robust
advantage of the position as a research assistant with a possible upgrade is also reflected in
the ranking score, which the ENTSCHEIDUNGSNAVI calculates from the weighted relative frequencies.
After this result, Peter is no longer uncertain and is looking forward to his job at the
university.
6. Conclusion
The present case study has shown how sound decision-making based on a value-focused thinking
approach and a rough modeling of preferences can be meaningfully carried out not only in
scientific theory but also in practice. Three findings can be recorded here.
First, it is worth noting that it pays off to structure a decision problem very carefully before
resolving it. This applies not only to the present case study, in which the two ultimately best
alternative courses of action emerged at this stage, but also to practical decision-making
problems in business and politics, as reported in various case studies (see, e.g., Keeney, 2012,
pp. 303).
Second, it must be emphasized that any decision analysis also requires a very careful
analysis of psychological bias factors that should be reduced by appropriate debiasing methods.
Again, this is not just a piece of advice from academia, but in business practice, corresponding
debiasing applications are increasingly being found (see, e.g., Scherpereel et al., 2015, pp. 32
ff., and Kahneman et al., 2011, pp. 51).
Third, the authors hope to have shown that multi-attribute utility theory is not merely a means
of annoying decision-theory students in university lecture halls with ivory-tower considerations,
but that scientific foundations and practical benefits can be reconciled in implementations such
as the ENTSCHEIDUNGSNAVI.
Literature
Eisenführ, F., Weber, M., Langer, T., Rationales Entscheiden, Berlin 2010.
Kahneman, D., Thinking, fast and slow, New York 2011.
Kahneman, D., Lovallo, D., Sibony, O., Before You Make That Big Decision, in: Harvard Business Review, June 2011, 51-60.
Keeney, R. L., Raiffa, H., Decisions with multiple objectives: preferences and value tradeoffs, New York 1976.
Keeney, R. L., Value-Focused Brainstorming, in: Decision Analysis, Vol. 9 (2012), 303-313.
Montibeller, G., von Winterfeldt, D., Cognitive and Motivational Biases in Decision and Risk Analysis, in: Risk Analysis, Vol. 35 (2015), 1230-1251.
Scherpereel, P., Gaul, J., Muhr, M., Entscheidungsverhalten bei Investitionen steuern, in: Controlling & Management Review, Sonderheft 2-15 (2015), 32-38.
von Nitzsch, R., Entscheidungslehre - Der Weg zur besseren Entscheidung, Aachen 2017.