Book

An Introduction to Decision Theory


Abstract

This introduction to decision theory offers comprehensive and accessible discussions of decision-making under ignorance and risk, the foundations of utility theory, the debate over subjective and objective probability, Bayesianism, causal decision theory, game theory, and social choice theory. No mathematical skills are assumed, and all concepts and results are explained in non-technical and intuitive as well as more formal ways. There are over 100 exercises with solutions, and a glossary of key terms and concepts. An emphasis on foundational aspects of normative decision theory (rather than descriptive decision theory) makes the book particularly useful for philosophy students, but it will appeal to readers in a range of disciplines including economics, psychology, political science and computer science.
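One of the book's core topics, decision-making under ignorance, can be illustrated with a minimal sketch of the maximin rule (choose the act whose worst outcome is best). The acts and utilities below are hypothetical, not taken from the book.

```python
def maximin(acts):
    """Decision under ignorance: pick the act whose worst outcome is best."""
    return max(acts, key=lambda name: min(acts[name]))

# Rows are acts; entries are utilities across unknown states (hypothetical).
acts = {
    "umbrella":    [5, 5],   # dry either way
    "no_umbrella": [8, 0],   # great if sunny, soaked if it rains
}
print(maximin(acts))  # umbrella
```

Maximin deliberately ignores probabilities, which is exactly what distinguishes decisions under ignorance from decisions under risk.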
... There is a growing interest in the logical foundations, computational implementations, and practical applications of single-agent sequential decision-making (SSDM) problems [32,23,28,18,9,26,24] in such diverse areas as Artificial Intelligence, Control, Logic, Economics, Mathematics, Politics, Psychology, Philosophy, and Medicine. Making decisions is central to agents' routines, and they usually need to make multiple decisions over time. ...
Preprint
Full-text available
We consider decision-making and game scenarios in which an agent is limited by his/her computational ability to foresee all the available moves towards the future - that is, we study scenarios with short sight. We focus on how short sight affects the logical properties of decision making in multi-agent settings. We start with single-agent sequential decision making (SSDM) processes, modeling them by a new structure of "preference-sight trees". Using this model, we first explore the relation between a new natural solution concept of Sight-Compatible Backward Induction (SCBI) and the histories produced by classical Backward Induction (BI). In particular, we find necessary and sufficient conditions for the two analyses to be equivalent. Next, we study whether larger sight always contributes to better outcomes. Then we develop a simple logical special-purpose language to formally express some key properties of our preference-sight models. Lastly, we show how short-sight SSDM scenarios call for substantial enrichments of existing fixed-point logics that have been developed for the classical BI solution concept. We also discuss changes in earlier modal logics expressing "surface reasoning" about best actions in the presence of short sight. Our analysis may point the way to logical and computational analysis of more realistic game models.
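The classical backward-induction (BI) analysis that the abstract contrasts with SCBI can be sketched as follows; the tree shape, player labels, and payoffs are hypothetical illustrations, not taken from the paper.

```python
# Backward induction on a finite game tree (hypothetical two-player example).
# Internal nodes name the player to move; leaves carry one payoff per player.

def backward_induction(node):
    """Return (payoffs, history) chosen by backward induction at `node`."""
    if "payoffs" in node:                      # leaf: nothing left to decide
        return node["payoffs"], []
    player = node["player"]
    best = None
    for label, child in node["moves"].items():
        payoffs, history = backward_induction(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [label] + history)
    return best

# A tiny tree: player 0 moves first, then player 1.
tree = {
    "player": 0,
    "moves": {
        "L": {"player": 1, "moves": {
            "l": {"payoffs": (2, 1)},
            "r": {"payoffs": (0, 0)},
        }},
        "R": {"payoffs": (1, 2)},
    },
}

payoffs, history = backward_induction(tree)
print(payoffs, history)  # (2, 1) ['L', 'l']
```

Short sight, in the paper's sense, would restrict how deep into `tree` an agent can look before choosing; classical BI, as above, assumes the whole tree is visible.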
... In contrast, decisions that determine what to do (i.e., decisions to φ) are referred to as practical decisions, e.g., when a judge decides that the defendant should be paroled due to their favorable risk profile. It is the latter, practical notion of decision-making that is generally presupposed in decision theory (see Peterson, 2009). Moreover, it is possible to go a step further and argue that the term "cognitive decision" is a misnomer. ...
Article
Full-text available
Despite growing interest in automated (or algorithmic) decision-making (ADM), little work has been done to conceptually clarify the term. This article aims to tackle this issue by developing a conceptualization of ADM specifically tailored to organizational contexts. It has two main goals: (1) to meaningfully demarcate ADM from similar, yet distinct algorithm-supported practices; and (2) to draw internal distinctions such that different ADM types can be meaningfully distinguished. The proposed conceptualization builds on three arguments: First, ADM primarily refers to the automation of practical decisions (decisions to φ) as opposed to cognitive decisions (decisions that p). Second, rather than referring to algorithms as literally making decisions, ADM refers to the use of algorithms to solve decision problems at an organizational level. Third, since algorithmic tools by nature primarily settle cognitive decision problems, their classification as ADM depends on whether and to what extent an algorithmically generated output p has an action triggering effect—i.e., translates into a consequential action φ. The examination of precisely this p-φ relationship, allows us to pinpoint different ADM types (suggesting, offloading, superseding). Taking these three arguments into account, we arrive at the following definition: ADM refers to the practice of using algorithms to solve decision problems, where these algorithms can play a suggesting, offloading, or superseding role relative to humans, and decisions are defined as action triggering choices.
... A recurrent problem faced in a vast range of real-world phenomena is decision-making. In this sense, decision theory seeks to formulate accurate hypotheses in which the final outcome depends on the decisions a rational agent makes to complete a certain task, aiming to find optimal solutions that generate the maximum possible profit [30,46]. Thus, decision-making is interpreted as the process of choosing an alternative, either following exact or heuristic procedures, and depending on the frequency of the action it can be classified as operational (short), strategic (long), or political (very long) [39,43]. ...
Preprint
Full-text available
We study a prototypical non-polynomial decision-making model for which agents in a population potentially alternate between two consumption strategies, one related to the exploitation of an unlimited but considerably expensive resource and the other a comparably cheaper but restricted and slowly renewable source. In particular, we study a model following a Boltzmann-like exploration policy, enhancing the accuracy at which the exchange rates are captured with respect to classical polynomial approaches by considering sigmoidal functions to represent the cost-profit relation in both exploit strategies. Additionally, given the intrinsic timescale separation between the decision-making process and recovery rates of the renewable resource, we use geometric singular perturbation theory to analyze the model. We further use numerical analysis to determine parameter ranges for which the model undergoes bifurcations. These bifurcations, being related to critical states of the system, are relevant to the fast transitions between strategies. Hence, we design controllers to regulate such rapid transitions by taking advantage of the system's criticality.
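The Boltzmann-like exploration policy the abstract mentions can be illustrated with a minimal softmax choice rule; the profit values and the temperature parameter below are hypothetical, and the sketch omits the sigmoidal cost-profit functions and timescale separation the paper actually studies.

```python
import math

def boltzmann_probabilities(profits, temperature):
    """Softmax over per-strategy profits: higher profit -> more likely choice.

    As temperature -> 0 the rule approaches strict best response;
    large temperatures approach uniform random exploration.
    """
    weights = [math.exp(p / temperature) for p in profits]
    total = sum(weights)
    return [w / total for w in weights]

# Two strategies: the expensive unlimited resource vs the cheap renewable one.
probs = boltzmann_probabilities([1.0, 2.0], temperature=0.5)
print(probs)  # the higher-profit strategy is strongly preferred
```

The temperature parameter is what makes the model non-polynomial: choice probabilities depend exponentially, not polynomially, on the profit gap.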
... There are also differences between decisions under risk, decisions under ignorance, and decisions made under uncertainty. The probability of possible outcomes is known in a decision under risk, while in decisions under ignorance, there is a lack of or limited information about possible outcomes and their probabilities (Erfani et al., 2020; Peterson, 2009). Decisions under uncertainty take place under unknown or ambiguous probabilities of possible outcomes because of limited historical data, poor understanding of complex systems and consequently limited information, unpredictable events, and/or limitations in expertise in assessing probabilities (Hipel & Ben-Haim, 1999; Thunnissen, 2003). ...
Article
Full-text available
Water security has become increasingly significant in Central Asian republics due to the uncertainties and risks associated with climate change and growing water demand. This importance is amplified by the complex transboundary river basins, interconnected water-energy infrastructure in the Aral Sea basin, rising water distribution disputes among riparian countries, and the vulnerability of water resources to climate change. Effective decision-making at all levels regarding water resource allocation and management is assumed to contribute to achieving water security. This research paper focuses on exploring the sources of endogenous uncertainty in managing water resources with case studies of Kazakhstan and the Kyrgyz Republic. The study adopts qualitative methods, specifically content and narrative analysis, to gather and analyze data from two interview phases (involving decision-makers at national and local levels and university representatives), academic literature, and policy reports. The research emphasizes endogenous uncertainties arising from the decision-making system and identifies key factors to mitigate them, including improved data availability and analysis, resilient infrastructure, and enhanced capacity. The study acknowledges the potential rise in exogenous uncertainties caused by limited transboundary cooperation, climate change impacts, and growing water needs. It highlights the significance of recognizing and comprehending the nature and effects of uncertainties. By doing so, Central Asian countries can make more informed decisions and work towards achieving sustainable and resilient water security in the region.
... In this paper, we develop a normative theory of rational learning in this setting. The standard theory for rational decision making under uncertainty is Bayesian decision theory (BDT) ([24,12]; for contemporary overviews, see [19,29]). The main ideas of this paper are motivated by a specific shortcoming of BDT: the assumption that the agent who is subject to BDT's recommendations is logically omniscient and in particular not limited by any computational constraints. ...
Preprint
Full-text available
The dominant theories of rational choice assume logical omniscience. That is, they assume that when facing a decision problem, an agent can perform all relevant computations and determine the truth value of all relevant logical/mathematical claims. This assumption is unrealistic when, for example, we offer bets on remote digits of pi or when an agent faces a computationally intractable planning problem. Furthermore, the assumption of logical omniscience creates contradictions in cases where the environment can contain descriptions of the agent itself. Importantly, strategic interactions as studied in game theory are decision problems in which a rational agent is predicted by its environment (the other players). In this paper, we develop a theory of rational decision making that does not assume logical omniscience. We consider agents who repeatedly face decision problems (including ones like betting on digits of pi or games against other agents). The main contribution of this paper is to provide a sensible theory of rationality for such agents. Roughly, we require that a boundedly rational inductive agent tests each efficiently computable hypothesis infinitely often and follows those hypotheses that keep their promises of high rewards. We then prove that agents that are rational in this sense have other desirable properties. For example, they learn to value random and pseudo-random lotteries at their expected reward. Finally, we consider strategic interactions between different agents and prove a folk theorem for what strategies bounded rational inductive agents can converge to.
... Conformity assessment therefore involves a decision-making process affected by uncertainty. Such a problem has been widely covered in the literature [4]–[6], mostly by taking epistemic uncertainty into account [7]. However, when the input elements to a decision-making process are measurement results, uncertainty takes on a well-defined meaning, defined by the VIM [1] and the GUM [2], and such a definition and the related evaluation methods cannot be disregarded when evaluating the risk of a wrong conformity assessment, as clearly shown in [8]–[13]. ...
Article
Full-text available
Measurement uncertainty plays a very important role in ensuring the validity of decision-making procedures, since it is the main source of incorrect decisions in conformity assessment. The guidelines given by the current Standards allow one to make a decision of conformity or non-conformity, according to the given limit and the measurement uncertainty associated with the measured value. Due to measurement uncertainty, a risk of a wrong decision is always present, and the Standards also give indications on how to evaluate this risk, although they mostly refer to a normal probability density function to represent the distribution of values that can reasonably be attributed to the measurand. Since such a function is not always the one that best represents this distribution of values, this paper considers some of the most often used probability density functions and derives simple formulas to set the acceptance (or rejection) limits in such a way that a pre-defined maximum admissible risk is not exceeded.
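For the normal case the abstract refers to, an acceptance limit can be tightened by a guard band so that the probability of wrongly accepting a non-conforming item stays below a chosen maximum risk. The sketch below assumes a normal distribution, a one-sided upper tolerance limit, and a hypothetical 5 % maximum admissible risk; the paper's formulas for other probability density functions are not reproduced here.

```python
from statistics import NormalDist

def acceptance_limit(upper_tolerance, std_uncertainty, max_risk):
    """Shift the acceptance limit inward from the tolerance limit so that a
    measured value at the limit has at most `max_risk` probability of the
    true value exceeding the tolerance (normal measurement model)."""
    k = NormalDist().inv_cdf(1.0 - max_risk)   # one-sided coverage factor
    return upper_tolerance - k * std_uncertainty

limit = acceptance_limit(upper_tolerance=10.0, std_uncertainty=0.2, max_risk=0.05)
print(round(limit, 3))  # 9.671 (guard band of about 1.645 * u)
```

A measured value below `limit` is accepted; the gap between `limit` and the tolerance limit is the guard band that caps the consumer's risk.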
... [35]. Fig. 4 also shows that the more complex a project is, the greater the need to rely on heuristics [35,37]. In space mission design, a combination of standards and models is usually used at component and systems level, and participative methods at the system architecture level or in early phase design. ...
Article
New space mission profiles arise from the convergence of reduced launch prices, miniaturization of electronics and the maturation of on-orbit servicing technologies. Additionally, a shift towards challenging Lunar objectives pushes for interconnected and complex-mode missions, and trends towards low-cost, sustainability and autonomy also encourage complex modes. The field of space logistics aims to develop conceptual and computational tools to optimize multi-node material-flow networks. However, existing tools insufficiently address the challenge of generating and comparing mission concepts methodically. Through a comprehensive literature review and a case study, we: (1) establish a need for a design support tool for early-phase complex space mission concept generation; (2) establish the potential of a novel approach using pattern languages; (3) propose and describe a concept for an innovative interdisciplinary tool using the identified methods; (4) verify the proposed support tool concept by replicating the Apollo study results as a proof of concept. We created a pattern language in which words are mission concept of operations elements, the grammar is an assembly algorithm that generates options automatically, and the writing system is intuitively readable diagrams illustrating candidate mission modes. Key figures of merit are also automatically compiled. Among the concepts we generated for the Apollo proof of concept study, the four principal candidates considered in 1962 were captured and ranked in the same order as historically and in recent literature. Findings are consistent with Apollo's choice of designing two modules to be separated in Lunar orbit, thus confirming the proposed tool's potential. Decision-makers could be supported in the early stages of the mission design process, where most of the value is locked in, by complementing existing methods such as concurrent engineering. Efficiently designing complex-mode missions is key to enabling cost-effective and highly ambitious missions, therefore supporting cislunar economy development.
... These include probabilistic reasoning, decision networks, utility functions and Markov decision processes, amongst many others. I refer the reader to the works of Peterson (2009), Russell and Norvig (2009), and Conitzer et al. (2017) for a more detailed discussion of the topic. ...
Article
Full-text available
As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated into society in a manner that would maximise the good for as many people as possible, whilst minimising the bad. One of the most urgent societal expectations of artificial agents is the need for them to behave in a manner that is morally relevant, i.e. to become artificial moral agents (AMAs). In this article, I will argue that exemplarism, an ethical theory based on virtue ethics, can be employed in the building of computationally rational AMAs with weak machine ethics. I further argue that three features of exemplarism, namely grounding in moral exemplars, meeting community expectations and practical simplicity, are crucial to its uniqueness and suitability for application in building AMAs that fit the ethos of AI4SG.
Article
Full-text available
In some cases of higher-order defeat, you rationally doubt whether your credence in p is rational without having evidence of how to improve your credence in p. According to the resilience framework proposed by Steglich-Petersen (Higher-order defeat and Doxastic Resilience), such cases require loss of doxastic resilience: retain your credence level but become more disposed to change your mind given future evidence. Henderson (Higher-Order Evidence and Losing One’s Conviction) responds that this allows for irrational decision-making and that we are better off understanding higher-order defeat in terms of imprecise probabilities. We argue first that Henderson’s imprecise probability framework models the wrong kind of thing. Credal imprecision is neither necessary nor sufficient for higher-order doubt. Second, we offer two ways of understanding the practical import of higher-order defeat given loss of doxastic resilience.
Article
Full-text available
This article explores the concept of reality and the transformation concerning the complex approach to the modes of existence based on the interrelation between diverse actants that make up our world. Considering recent ontological debates and critiques of modernity, the article argues for a shift away from ready-made suppositions about reality and the desire for simplified answers. We propose a radical idea of an actant interaction perspective grounded in Bruno Latour’s and Hartmut Rosa’s ideas of exploring an ontology embracing curiosity, imagination, and the importance of making mistakes as necessary attitudes in navigating the uncontrollable nature of reality. The article emphasizes the importance of embracing a sense of liberty when comprehending and interacting with the world. It encourages us to concentrate on the strengths and connections of living organisms.
Article
In situations where we ignore everything but the space of possibilities, we ought to suspend judgment—that is, remain agnostic—about which of these possibilities is the case. This means that we cannot sum our degrees of belief in different possibilities, something that has been formalised as an axiom of non-additivity. Consistent with this way of representing our ignorance, I defend a doxastic norm that recommends that we should nevertheless follow a certain additivity of possibilities: even if we cannot sum degrees of belief in different possibilities, we should be more confident in larger groups of possibilities. It is thus shown that, in the type of situation considered (in so-called “classical ignorance”, i.e. “behind a thin veil of ignorance”), it is epistemically rational for advocates of suspending judgment to endorse this comparative confidence, while on the other hand it is shown that, even in classical ignorance, no stronger belief—such as a precise uniform probability distribution—is warranted.
Conference Paper
Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply various methods from decision theory in order to measure optimality.
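The decision-theoretic treatment of operator ordering described above can be illustrated with a minimal worst-case (Wald-style) choice over interval selectivities. The cost model (each selection scans the tuples surviving the earlier ones) and the interval values below are hypothetical, and enumerating all permutations is only feasible for small operator sets.

```python
from itertools import permutations

def plan_cost(selectivities):
    """Cost of applying selections in order: each operator processes the
    tuples that survived the earlier ones (cost relative to input size 1)."""
    cost, surviving = 0.0, 1.0
    for s in selectivities:
        cost += surviving          # pay once per surviving tuple
        surviving *= s
    return cost

def worst_case_order(intervals):
    """Wald's criterion over plans: pick the order whose worst-case cost
    (all selectivities at their upper bounds) is smallest."""
    orders = permutations(range(len(intervals)))
    return min(orders, key=lambda o: plan_cost([intervals[i][1] for i in o]))

# Intervals (low, high) of estimated selectivities for three selections.
intervals = [(0.1, 0.9), (0.2, 0.4), (0.5, 0.6)]
print(worst_case_order(intervals))  # (1, 2, 0): lowest worst-case selectivity first
```

With exact selectivities the optimal order simply sorts them ascending; the interval model recovers that ordering on the pessimistic bounds, and other decision rules (e.g. minimax regret) would plug into the same enumeration.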
Article
Full-text available
Decision problems in physics have been an active field of research for quite a few decades, resulting in some interesting findings in recent years. However, such research investigations are based on a priori knowledge of theoretical computer science and the technical jargon of set theory. Here, I discuss a particular, but significant, instance of how decision problems in physics can be realised without such specific prerequisites. I expose a hitherto unnoticed contradiction, which can be posed as a decision problem, concerning the oil drop experiment, and thereby resolve it by refining the notion of 'existence' in physics. This consequently leads to the undecidability of the charge spectral gap through the notion of 'undecidable charges', which is in tandem with the completeness condition of a theory as stated by Einstein, Podolsky and Rosen in their seminal work. Decision problems can now be realised in connection with basic physics in general, rather than quantum physics in particular, as per some recent claims.
Article
Full-text available
In this paper, I argue that art can help us imagine what it would be like to have experiences we have never had before. I begin by surveying a few of the things we are after when we ask what an experience is like. I maintain that it is easy for art to provide some of them. For example, it can relay facts about what the experience involves or what responses the experience might engender. The tricky case is the phenomenal quality of the experience or what it feels like from the inside. Thus, in the main part of the paper, I discuss how art can provide us with this as well. I conclude by situating my view in the context of the broader debate over transformative experiences. I maintain that art can solve some but not all of the problems that arise when deciding whether to undergo a transformation.
Article
Background Depot buprenorphine can potentially address many limitations of other forms of opioid replacement therapy (ORT). This paper builds upon the concept of the ‘informed patient’ to explore individuals’ decisions to initiate injectable depot buprenorphine. Methods Data derive from a qualitative study of 26 people with opioid use disorder who were recruited from drug treatment services in England and Wales and interviewed within 72 hours of starting injectable depot buprenorphine treatment. Interviews were conducted by telephone, audio-recorded, transcribed verbatim, and analysed via Iterative Categorization. Findings Participants’ decisions to initiate treatment were underpinned by receiving sufficient information to trust depot buprenorphine; current treatment not meeting their personal needs or goals; frequently uncritical perceptions of depot buprenorphine; and restricted access to depot buprenorphine making recipients feel grateful. Overall, participants said they had enough information and knowledge to decide they wanted depot buprenorphine. However, dissatisfaction with current ORT, desire for better treatment, and depot buprenorphine’s limited availability seemed to hinder informed decision-making. Conclusions Whilst pharmaceutical products cannot solve the complex life problems often associated with opioid use disorder, we need to increase access to all ORT forms so that patients do not feel they have to rush into any medication without adequate preparation.
Book
Classical decision theory evaluates entire worlds, specified so as to include everything a decision-maker cares about. Thus applying decision theory requires performing computations far beyond an ordinary decision-maker's ability. In this book Paul Weirich explains how individuals can simplify and streamline their choices. He shows how different 'parts' of options (intrinsic, temporal, spatiotemporal, causal) are separable, so that we can know what difference one part makes to the value of an option, regardless of what happens in the other parts. He suggests that the primary value of options is found in basic intrinsic attitudes towards outcomes: desires, aversions, or indifferences. And using these two facts he argues that we need only compare small parts of the options we face in order to make a rational decision. This important book will interest readers in decision theory, economics, and the behavioral sciences.
Chapter
We have already seen, in Chap. 2, that models play an important role in the advancement of knowledge. Nowadays, every scientific theory and every technical activity is based on models that describe it in logical and mathematical terms.
Article
Full-text available
A set of players delegate playing a game to a set of representatives, one for each player. We imagine that each player trusts their respective representative’s strategic abilities. Thus, we might imagine that per default, the original players would simply instruct the representatives to play the original game as best as they can. In this paper, we ask: are there safe Pareto improvements on this default way of giving instructions? That is, we imagine that the original players can coordinate to tell their representatives to only consider some subset of the available strategies and to assign utilities to outcomes differently than the original players. Then can the original players do this in such a way that the payoff is guaranteed to be weakly higher than under the default instructions for all the original players? In particular, can they Pareto-improve without probabilistic assumptions about how the representatives play games? In this paper, we give some examples of safe Pareto improvements. We prove that the notion of safe Pareto improvements is closely related to a notion of outcome correspondence between games. We also show that under some specific assumptions about how the representatives play games, finding safe Pareto improvements is NP-complete.
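A naive sufficient test for the safe Pareto improvements defined above can be sketched as follows. It makes the strong, hypothetical assumption that with no probabilistic assumptions at all, the representatives might end up at any reachable outcome of whichever game they are handed; the paper's actual analysis via outcome correspondences is more refined than this.

```python
def safe_pareto_improvement(default_outcomes, restricted_outcomes):
    """Naive sufficient check: the change of instructions is safe if every
    reachable outcome of the restricted game gives every player at least as
    much as every reachable outcome of the default game."""
    return all(
        all(r[i] >= d[i] for i in range(len(d)))
        for r in restricted_outcomes
        for d in default_outcomes
    )

# Payoff tuples (player 1, player 2) reachable in each game (hypothetical).
default = [(1, 1), (1, 2)]
restricted = [(2, 2), (3, 2)]
print(safe_pareto_improvement(default, restricted))  # True
```

The paper's NP-completeness result concerns finding such improvements in general; this pairwise comparison only certifies the easy, fully dominated case.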
Article
Complex statistical methods are continuously developed across the fields of ecology, evolution, and systematics (EES). These fields, however, lack standardized principles for evaluating methods, which has led to high variability in the rigor with which methods are tested, a lack of clarity regarding their limitations, and the potential for misapplication. In this review, we illustrate the common pitfalls of method evaluations in EES, the advantages of testing methods with simulated data, and best practices for method evaluations. We highlight the difference between method evaluation and validation and review how simulations, when appropriately designed, can refine the domain in which a method can be reliably applied. We also discuss the strengths and limitations of different evaluation metrics. The potential for misapplication of methods would be greatly reduced if funding agencies, reviewers, and journals required principled method evaluation. Expected final online publication date for the Annual Review of Ecology, Evolution, and Systematics, Volume 53 is November 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Article
Collusive bidding has been a major concern for both antitrust authorities and clients because it can inflate winning prices to artificially high levels. Deciding to initiate such illegal business competition is subject to external environmental factors (EEFs). Understanding the EEFs' impacts on collusive bidding decisions allows institutional arrangements to encapsulate free and fair competition. Although previous studies have examined the EEFs separately, few have explored their synthesis in determining collusive bidding decisions. This study decomposed the EEFs into three categories: economic, industrial, and geographical (EIG), and detected their effects on collusive bidding decisions by using 254 collusive bidding cases gathered from the Chinese construction industry. Results of the data analysis indicated that as a key EEF, industrial competition has a positive effect on bidders' collusive willingness and collusive team number. However, its impacts on collusive bid prices are negative. In addition, the coupling of economic development and industrial competition positively affects bidders' collusive prices. These findings provide new insight into the essence of the EEFs and support the formulation of countermeasures to deter bidders from deciding to collude.
Article
Purpose This paper aims to examine the implications of applying Herbert Simon’s bounded rationality to records and information management (RIM) and the possibility that a risk/reward heuristic may be part of the disposition decision-making cognitive process. This in turn may improve the understanding of disposition and its suboptimal results, offer alternatives for understanding why certain behaviors exist around keeping information beyond its retention period, and possibly help alter this behavior. It may also improve the application of information life-cycle policies through the development of new decision-making heuristics for information retention. Design/methodology/approach This paper examines disposition of information and how the addition of bounded rationality may improve the understanding of why disposition has not been as successful as it might be. Findings This paper concludes that bounded rationality could elevate the RIM function and alter how RIM practitioners within the private sector understand how appraisal and therefore disposition of information occurs. Further, the inclusion of bounded rationality into disposition decision-making may create new roles for practitioners and extend the influence and reach of RIM. Future developments must be watched and analyzed to see if this approach becomes the norm. Practical implications This paper will be of interest to stakeholders responsible for valuing information, appraisal/disposition of information, life-cycle management, records management, information management and big data analytics. The work is original, but parts of this subject were previously addressed in another study. Originality/value Parts of this work were part of a PhD study by this author.
Chapter
In this chapter, the fundamentals of decisions are presented. Basic terminology (alternatives, states, events, outcomes, etc.) is introduced and the different typologies of decisions are analyzed. After a very brief review of the history of Decision Theory, the decision process is analyzed, from the point of view of several authors. Finally, it is shown that all methodologies can be aligned with Herbert A. Simon’s model of decision process. Based on that model, the three major steps in decision management are studied: intelligence, design, and choice. Afterwards, the fundamental choice step is analyzed according to the different possible scenarios: single decision scenario and multiple decision scenario. In the single decision scenario, the different situations are described: decision-making under certainty, decision-making under risk, decision-making under uncertainty, and decision-making under hybrid scenarios. In the multiple decision scenario, two formalisms of decision modelling are explained: decision trees and influence diagrams. Finally, group decision-making is described, and general procedures are outlined.
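The decision-tree formalism named in the chapter summary can be sketched by "folding back" a tree of decision and chance nodes: take expectations at chance nodes and maximise at decision nodes. The node encoding and the lottery numbers below are hypothetical illustrations.

```python
def fold_back(node):
    """Evaluate a decision tree: maximise at decision nodes, take
    expectations at chance nodes, and return the utility at leaves."""
    kind = node["kind"]
    if kind == "leaf":
        return node["utility"]
    if kind == "chance":
        return sum(p * fold_back(child) for p, child in node["branches"])
    # decision node: choose the alternative with the highest value
    return max(fold_back(child) for child in node["options"])

# Decision under risk: a sure 60 versus a 50/50 gamble between 100 and 0.
tree = {"kind": "decision", "options": [
    {"kind": "leaf", "utility": 60},
    {"kind": "chance", "branches": [
        (0.5, {"kind": "leaf", "utility": 100}),
        (0.5, {"kind": "leaf", "utility": 0}),
    ]},
]}
print(fold_back(tree))  # 60: the sure option beats the gamble's EU of 50
```

Influence diagrams compactly represent the same choice problems; folding back a tree like this is the textbook evaluation procedure for the single decision scenario under risk.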
Article
Does ‘remembering that p’ entail ‘knowing that p’? The widely-accepted epistemic theory of memory (hereafter, ETM) answers affirmatively. This paper purports to reveal the tension between ETM and the prevailing anti-luck epistemology. Central to my argument is the fact that we often ‘vaguely remember’ a fact, of which one plausible interpretation is that our true memory-based beliefs formed in this way could easily have been false. Drawing on prominent theories of misremembering in philosophy of psychology (e.g. fuzzy-trace theory and simulationism), I will construct cases where the subject vaguely remembers that p while fails to meet the safety condition, which imply either that ETM is false or that safety is unnecessary for knowledge. The conclusion reached in this paper will be a conditional: if veritic epistemic luck is incompatible with knowledge, then ‘remembering that p’ does not entail ‘knowing that p’.
Chapter
The chapter concerns emerging cybernetics, which is the school of “meaning to lead” and is particularly associated with the idea of domination and control. The chapter initially anatomizes the sociology of software cybernetics into two broad movements—free/libre and open source software (FLOSS) and proprietary closed-source software (PCSS)—to argue for a good software governance approach. It discusses (a) in what matters and (b) for what reasons the software governance of Turkey has locked into the ecosystems of PCSS and, in particular, considers the causes, effects, and potential outcomes of not utilizing FLOSS in the state of Turkey. The government has continuously stated that there are no compulsory national or international convention(s) or settlement(s) with the ecosystems of PCSS and that there is no vendor lock-in concern. Nevertheless, the chapter principally argues that Turkey has followed a pragmatic decision-making process for software in emerging cybernetics that leads and contributes to the techno-social externality of PCSS hegemonic stability.
Article
Women's lower career advancement relative to men is sometimes explained by internal factors such as women's lower willingness to make sacrifices for their career, and sometimes by external barriers such as discrimination. In the current research, positing a dynamic interplay between internal and external factors, we empirically test how external workplace barriers guide individuals' internal decisions to make sacrifices for the advancement of their careers. In two high-powered studies in traditionally male-dominated fields (surgery, N = 1,080; veterinary medicine, N = 1,385), women indicated less willingness than men to make sacrifices for their career. Results of structural equation modeling demonstrated that this difference was explained by women's more frequent experience of gender discrimination and lower perceptible fit with people higher up the professional ladder. These barriers predicted reduced expectations of success in their field (Study 1) and expected success of their sacrifices (Study 2), which in turn predicted lower willingness to make sacrifices. The results explain how external barriers play a role in internal career decision making. Importantly, our findings show that these decision-making processes are similar for men and women, yet, the circumstances under which these decisions are made are gendered. That is, both men and women weigh the odds in deciding whether to sacrifice for their career, but structural conditions may influence these perceived odds in a way that favors men. Overall, this advances our understanding of gender differences, workplace inequalities, and research on the role of “choice” and/or structural discrimination behind such inequalities.
Chapter
Uncertainty is a key feature of the world; when we consider actions to take, we confront uncertainty, but also issues of causality. We begin with an analytical framework that builds on probability both to express uncertainty as well as causality. We then consider uncertainty and normative decision frameworks. We then see how causal inference informs us about heart disease and climate change. We end with a brief consideration of beliefs, communication and literacy.
Article
Background Young adults with long‐term conditions can struggle to accept their diagnosis and can become overwhelmed with managing their condition. Suboptimal transfer from paediatric to adult services with a resultant disengagement with the service can result in less involvement in care and decision‐making. Shared decision‐making can improve involvement in health decisions and increase satisfaction with treatment/therapy and care. Objectives An integrative literature review was conducted to explore and understand young adults’ experiences of decision‐making in health care. Design An integrative literature review. Data Sources CINAHL, EMCARE, PsycINFO, HMIC, EMBASE, Web of Science, PubMed, MEDLINE, EBSCOHOST and COCHRANE databases were searched for relevant literature published between January 1999 and January 2020. Findings Thirteen primary research papers met the inclusion criteria. Four main themes were identified: (1) Information delivery and communication; (2) participation in decision‐making; (3) social factors influencing decision‐making and (4) emotional impact of decision‐making. Conclusions Young adults with long‐term conditions have specific decision‐making needs which can impact their emotional health. Research with a specific focus on young adults’ experiences of decision‐making in health care is needed.
Chapter
This chapter introduces decision models. Firstly, a brief review of the fundamentals of decision theory is presented. Secondly, we describe decision trees and their evaluation strategy. Thirdly, influence diagrams are introduced, including three alternative evaluation strategies: transformation to a decision tree, variable elimination and transformation to a Bayesian network. The chapter concludes with two application examples: a decision support system for lung cancer and a decision model that acts as a caregiver to guide an elderly or handicapped person in cleaning her hands.
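The decision-tree evaluation strategy mentioned above can be sketched in a few lines; the tree and its numbers are illustrative, not taken from the chapter. Chance nodes average their children's values by probability, and decision nodes pick the best branch:

```python
def evaluate(node):
    """Expected utility of a decision tree given as nested dicts."""
    kind = node["kind"]
    if kind == "outcome":
        return node["utility"]
    values = [evaluate(child) for child in node["children"]]
    if kind == "chance":
        # Chance node: probability-weighted average of child values.
        return sum(p * v for p, v in zip(node["probs"], values))
    return max(values)  # decision node: choose the best branch

tree = {
    "kind": "decision",
    "children": [
        {"kind": "chance", "probs": [0.75, 0.25],  # risky option
         "children": [{"kind": "outcome", "utility": 100},
                      {"kind": "outcome", "utility": -50}]},
        {"kind": "outcome", "utility": 40},        # safe option
    ],
}
print(evaluate(tree))  # 62.5: the risky branch beats the safe one
```

The same recursive fold is what a transformation from an influence diagram to a decision tree ultimately feeds into.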
Article
Full-text available
We study correspondences that choose an interval of alternatives when agents have single-peaked preferences over locations and ordinally extend their preferences over intervals. We extend the main results of Moulin (Public Choice 35:437–455, 1980) to our setting and show that the results of Ching (Soc Choice Welf 26:473–490, 1997) cannot always be similarly extended. First, strategy-proofness and peaks-onliness characterize the class of generalized median correspondences (Theorem 1). Second, this result neither holds on the domain of symmetric and single-peaked preferences, nor can min/max continuity substitute for peaks-onliness in it (see Counterexample 3). Third, strategy-proofness and voter-sovereignty characterize the class of efficient generalized median correspondences (Theorem 2).
Article
Good’s theorem is the apparent platitude that it is always rational to ‘look before you leap’: to gather (reliable) information before making a decision when doing so is free. We argue that Good’s theorem is not platitudinous and may be false. And we argue that the correct advice is rather to ‘make your act depend on the answer to a question’. Looking before you leap is rational when, but only when, it is a way to do this. • 1 Introduction • 2 Good’s Theorem • 3 Inexact Observation • 4 Independence • 5 Conditionality • 5.1 The principle • 5.2 Revisiting the counterexamples • 6 Conclusion
Article
Full-text available
The Bhagavadgītā, part of the sixth book of the Hindu epic The Mahābhārata, offers a practical approach to mokṣa, or liberation, and freedom from saṃsāra, or the cycle of death and rebirth. According to the approach, known as karmayoga (‘the yoga of action’), salvation results from attention to duty and the recognition of past acts that inform the present and will direct the future. In the Bhagavadgītā, Kṛṣṇa advocates selfless action as the ideal path to realizing the truth about oneself as well as the ultimate reality. Kṛṣṇa proclaims that humans have rights only to actions and not to their results, whether good or bad (2.47). Therefore, humans should not desire any results whatsoever. The prisoner’s dilemma is a fictional story that shows why individuals who seek only their personal benefit meet worse outcomes than those possible by cooperating with others. The dilemma provides an effective, albeit often overlooked, method for studying the Hindu principle of niṣkāmakarma (‘desireless action’) that is arguably the central teaching of the Bhagavadgītā. In the context of the prisoner’s dilemma, a prisoner who wants to uphold niṣkāmakarma may choose one of two decision-making strategies: to be indifferent and leave the decision to chance or to either pursue the common good or the other person’s benefit instead of his or her own. Assuming that followers of niṣkāmakarma can be goal-oriented, the second strategy is more appropriate than the first, as long as one pursues unselfish goals and remains both indifferent and uncommitted to personal benefit.
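The strategic structure the paper exploits can be shown with a toy payoff matrix (years in prison, so lower is better; the numbers are the conventional textbook ones, not taken from the article):

```python
# (my_years, other_years) indexed by (my_move, other_move)
payoff = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_reply(other_move):
    # Fewest years for me, holding the other player's move fixed.
    return min(("cooperate", "defect"),
               key=lambda mine: payoff[(mine, other_move)][0])

# Defection dominates: it is the best reply whatever the other does...
assert best_reply("cooperate") == "defect" and best_reply("defect") == "defect"
# ...yet mutual defection leaves both worse off than mutual cooperation,
# the gap that an unselfish, goal-oriented strategy can close.
assert payoff[("defect", "defect")][0] > payoff[("cooperate", "cooperate")][0]
```

This is exactly the tension niṣkāmakarma reframes: a player indifferent to personal benefit is not bound by the dominance reasoning that drives both players to the worse outcome.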
Article
Full-text available
We consider the problem of choosing a set of locations of a public good on the real line ℝ when agents have single-peaked preferences over points. We ordinally extend preferences over compact subsets of ℝ, and extend the results of Ching and Thomson (1996), Vohra (1999), and Klaus (2001) to choice correspondences. We show that efficiency and replacement-dominance characterize the class of target point functions (Corollary 2) while efficiency and population-monotonicity characterize the class of target set correspondences (Theorem 1).
Article
Full-text available
In this work, we present a model to assist in the equipment replacement decision process in the context of the Brazilian forestry sector, using detailed equipment maintenance schedules. The approach is based on the economic life (EL) method applied to three scenarios. In the first scenario, we considered buying a new machine in the first period. In the second scenario, we considered keeping the current machine for 24 months and then buying a new one, while in the third scenario we considered selling the current machine, leasing a new one for 24 months, and then buying a new machine. Our example draws from five tree-harvesting machines that cut, delimb, and process tree stems into logs in forest plantations of a large integrated Brazilian forestry company. The algorithms were coded in Visual Basic for Applications (VBA). The results show that the new machine discounted cost (CNM) and rebuilt machine discounted cost (CRM) scenarios presented higher values than the leased machine discounted cost (CRE) scenario, so those solutions should be rejected. The economic life proved optimal at 52 months for a new machine, and the best decision is to sell the current machine, lease a new one for 24 months, and then buy a new one. The solution and the algorithms used are described in the work.
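The comparison described above reduces to discounting each scenario's cost stream to present value and keeping the cheapest. The sketch below uses entirely hypothetical cash flows and an assumed monthly discount rate; it shows the mechanics of the comparison, not the paper's data:

```python
RATE = 0.01  # assumed monthly discount rate (illustrative)

def discounted_cost(cash_flows, rate=RATE):
    # Present value of a stream of monthly costs (positive numbers),
    # with the first cash flow occurring at the end of month 1.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical 52-month cost streams for the three scenarios.
scenarios = {
    "buy new now (CNM)":        [100_000] + [2_000] * 51,
    "rebuild, buy at 24 (CRM)": [5_000] * 24 + [100_000] + [2_000] * 27,
    "lease 24, then buy (CRE)": [4_000] * 24 + [100_000] + [2_000] * 27,
}
costs = {name: discounted_cost(stream) for name, stream in scenarios.items()}
cheapest = min(costs, key=costs.get)
```

With real maintenance schedules, salvage values, and lease rates in place of these placeholders, the same loop recovers the kind of scenario ranking the study reports.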
Article
The article treats law & economics as a proposal of a theory of decision making in legal settings. It is emphasized that the distinction between two approaches in economic analysis of law: the neoclassical and the behavioral one, is made with reference to two different theories of decision making applied in the realm of each approach. The neoclassical approach is based on the theory of expected utility, whereas the behavioral one – on prospect theory. According to the scholars on both sides, application of decision theory might be helpful in influencing behavior by legal norms in a more sophisticated way. The claim of the article is that law & economics scholars misinterpret the assumptions and propositions of the theories and/or formulate excessive claims, if they argue that decision theoretical findings provide knowledge about the way in which people’s decisions are influenced by law.
Article
Full-text available
The paper focuses on the issue of the compatibility of social institutions and conventions. It first introduces a modest account of conventionality building on five distinctive features – interdependence, arbitrariness, mind-independence, spontaneity, and normative neutrality – which constitute conventional behaviour; it then presents the two major theories of social institutions, which explain them in terms of rules or of equilibria. The argument is that conventions cover a wide-ranging area and cannot be identified with the category of institutions, because doing so would be too restrictive and would contradict the initial modest account.
Article
This paper proposes new grounds for the legal ambivalence about ‘bad character evidence’. It is suggested that errors based on such evidence are profoundly tragic in the Aristotelian sense: the defendant who previously committed crime is likely to reoffend; nevertheless, she beats the odds and refrains from further crime commission – only to then be falsely convicted based on the very odds she has almost heroically managed to beat. It is further proposed that the tragic nature of such false convictions might make them particularly unfair to the defendant. It is, however, submitted that the likelihood of errors based on such evidence is unknown and probably also unknowable. Accordingly, the maximin rule for decision in conditions of deep ignorance is applied, leading to the conclusion that exclusion is to be preferred.
Article
This article analyses how normative decision theory is understood by economists. The paradigmatic example of normative decision theory, discussed in the article, is the expected utility theory. It has been suggested that the status of the expected utility theory has been ambiguous since early in its history. The theory has been treated as descriptive, normative, or both. This observation is the starting point for the analysis presented here. The text discusses various ways in which economists and philosophers of economics have conceptualized the normative status of the expected utility theory, and it shows that none is satisfactory from the point of view of philosophy of science.
Article
Full-text available
Protein-coding genetic variants that strongly affect disease risk can yield relevant clues to disease pathogenesis. Here we report exome-sequencing analyses of 20,791 individuals with type 2 diabetes (T2D) and 24,440 non-diabetic control participants from 5 ancestries. We identify gene-level associations of rare variants (with minor allele frequencies of less than 0.5%) in 4 genes at exome-wide significance, including a series of more than 30 SLC30A8 alleles that conveys protection against T2D, and in 12 gene sets, including those corresponding to T2D drug targets (P = 6.1 × 10⁻³) and candidate genes from knockout mice (P = 5.2 × 10⁻³). Within our study, the strongest T2D gene-level signals for rare variants explain at most 25% of the heritability of the strongest common single-variant signals, and the gene-level effect sizes of the rare variants that we observed in established T2D drug targets will require 75,000–185,000 sequenced cases to achieve exome-wide significance. We propose a method to interpret these modest rare-variant associations and to incorporate these associations into future target or gene prioritization efforts.
Conference Paper
Purpose – to prepare, disseminate and implement the new concept of economics engineering, the essence of which is an integrated approach to the problems of economic growth, innovation activities, technological progress, and breakthroughs. Research methodology – systematic analysis and synthesis of various scientific ideas and approaches, and the formulation and analysis of new insights. Findings – a new concept of economics engineering is prepared. This concept provides an integrated approach to the solution of the problems of economic growth, innovation activities, technological progress, and breakthroughs, as well as to the application of dynamic management tools. The implementation of this concept in the practice of economic activities and research creates various preconditions for the anticipation and realization of new opportunities for economic development and technological breakthroughs under contemporary conditions of globalization, European integration, and the creation of a knowledge-based society and knowledge economy. Research limitations – the proposed concept is limited to cases of macroeconomic analysis and the preparation of strategic economic decisions. Practical implications – the proposed concept is usable in various cases of economic policy decision-making. Originality/Value – new insights and perspective ideas on the priorities of economics engineering science and on the application of dynamic management tools are described and analyzed. Orientation to these insights and ideas highlights significant new trends in scientific research of an economic profile.
Chapter
The main aim of the paper was to create a move-suggestion system for checkers players. For this purpose, two different approaches were used. The first was the well-known search algorithm Negamax, and the second was the Reinforcement Learning algorithm SARSA. To run experiments on the algorithms’ performance, a special environment was created: a checkers program whose main goal was to give its user the possibility of launching a game between two different agents. One of its constraints was also to make creating and adding a new agent easy for future use in other research.
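A minimal Negamax, the first of the two approaches, can be sketched as below. The checkers environment itself is not reproduced here; a toy take-1-or-2 Nim game stands in for any two-player zero-sum game exposing the same interface (`legal_moves`, `apply`, `is_terminal`, `score`):

```python
class Nim:
    """Toy game: players alternately take 1 or 2 stones; taking the last wins."""
    def __init__(self, stones, to_move=+1):
        self.stones, self.to_move = stones, to_move
    def is_terminal(self):
        return self.stones == 0
    def score(self):
        # From player +1's viewpoint: at a terminal state the side to move
        # has lost, because the previous player took the last stone.
        return -self.to_move if self.stones == 0 else 0
    def legal_moves(self):
        return [n for n in (1, 2) if n <= self.stones]
    def apply(self, n):
        return Nim(self.stones - n, -self.to_move)

def negamax(state, depth, color):
    # color is +1 when player +1 is to move, -1 otherwise; the zero-sum
    # symmetry lets one max() replace the usual min/max pair.
    if depth == 0 or state.is_terminal():
        return color * state.score()
    return max(-negamax(state.apply(m), depth - 1, -color)
               for m in state.legal_moves())

def best_move(state, depth):
    # Best move for the player to move in `state` (assumed to be player +1).
    return max(state.legal_moves(),
               key=lambda m: -negamax(state.apply(m), depth - 1, -1))

print(best_move(Nim(4), depth=10))  # 1: leave the opponent a losing pile of 3
```

Swapping the `Nim` class for a checkers position with the same four methods gives the first agent of the paper, modulo move generation and a heuristic `score` for non-terminal cut-offs.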
Article
A common objection to the precautionary principle is that it is irrational. I argue that this objection goes beyond the often-discussed claim that the principle is incoherent. Instead, I argue, expected utility theory is the source of several more sophisticated irrationality charges against the precautionary principle. I then defend the principle from these objections by arguing (i) that the relevant features of the precautionary principle are part of plausible normative theories, and (ii) that the precautionary principle does not diverge more from ideal expected utility maximization than non-ideal expected utility maximizing procedures, and may do better in real-world choices.
Article
Full-text available
In sustainable development (SD), the rationality behind decision-making is non-trivial because it deals with, among others, inter-temporal issues in complex systems. Choosing the rational thing to do and coping with the tradeoffs of short and long-term outcomes are demanding because of the uncertainty regarding future preferences and the consequences of today’s actions. If decisions are poorly made, there could be terrible consequences for humanity. One way to understand the rationality behind the SD decision-making process is to study the motivations to act. However, the motivations to engage in SD are insufficiently discussed in the literature. This paper aims to analyze those motivations in the context of rational decision-making with different time frames by making them explicit. A framework is proposed to identify motivations by systematizing the sample literature on why SD is important, what SD is, and how SD is practiced. The fear of harm from possible overshoot and collapse is proposed as the main driver for the motivations to act. Such fear shapes understandings of the inter-temporal effects of today’s decisions, suggesting the use of different types of rationalities according to the time frame considered.
Article
Full-text available
The precautionary principle (PP) is an influential principle of risk management. It has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet, there is little consensus on precisely how to understand and formulate the principle. In this article I prove some impossibility results for two plausible formulations of the PP as a decision‐rule. These results illustrate the difficulty in making the PP consistent with the acceptance of any tradeoffs between catastrophic risks and more ordinary goods. How one interprets these results will, however, depend on one's views and commitments. For instance, those who are convinced that the conditions in the impossibility results are requirements of rationality may see these results as undermining the rationality of the PP. But others may simply take these results to identify a set of purported rationality conditions that defenders of the PP should not accept, or to illustrate types of situations in which the principle should not be applied.
Article
In “Experience and Time,” Brian Garrett poses a challenge to friends of the rationality of pure time preferences. In this discussion note, we accept the challenge and provide two kinds of cases wherein some pure time preferences could be deemed rational.
Article
Full-text available
Several axiom systems for preference among acts lead to a unique probability and a state-independent utility such that acts are ranked according to their expected utilities. These axioms have been used as a foundation for Bayesian decision theory and subjective probability calculus. In this article we note that the uniqueness of the probability is relative to the choice of what counts as a constant outcome. Although it is sometimes clear what should be considered constant, in many cases there are several possible choices. Each choice can lead to a different “unique” probability and utility. By focusing attention on state-dependent utilities, we determine conditions under which a truly unique probability and utility can be determined from an agent's expressed preferences among acts. Suppose that an agent's preference can be represented in terms of a probability P and a utility U. That is, the agent prefers one act to another iff the expected utility of that act is higher than that of the other. There are many other equivalent representations in terms of probabilities Q, which are mutually absolutely continuous with P, and state-dependent utilities V, which differ from U by possibly different positive affine transformations in each state of nature. We describe an example in which there are two different but equivalent state-independent utility representations for the same preference structure. They differ in which acts count as constants. The acts involve receiving different amounts of one or the other of two currencies, and the states are different exchange rates between the currencies. It is easy to see how it would not be possible for constant amounts of both currencies to have simultaneously constant values across the different states. Savage (1954, sec. 5.5) discovered a situation in which two seemingly equivalent preference structures are represented by different pairs of probability and utility. 
He attributed the phenomenon to the construction of a “small world.” We show that the small world problem is just another example of two different, but equivalent, representations treating different acts as constants. Finally, we prove a theorem (similar to one of Karni 1985) that shows how to elicit a unique state-dependent utility and does not assume that there are prizes with constant value. To do this, we define a new hypothetical kind of act in which both the prize to be awarded and the state of nature are determined by an auxiliary experiment.
Article
Full-text available
In this paper I shall somewhat investigate several formulations of decision theory from a purely quantitative point of view, thus leaving aside the whole question of measurement. Since almost any foundational work on decision theory strives at proving nicer and nicer measurement results and representation theorems, I feel obliged to give a short explanation of my self-imposed limitation. The first and best reason for it is that I have not got anything new to say about measurement, and the second is that one need not say anything: It was hard work to convince economists that cardinalization is possible and meaningful. This was accomplished by proving existence and uniqueness theorems establishing the existence of cardinal functions (e.g. subjective utilities and probabilities) unique up to certain transformations that mirror ordinal concepts (e.g. subjective preferences) in a certain way. And surely, such theorems provide an excellent justification for the use of cardinal concepts. The eagerness in the search for representation theorems, however, is not really understandable but on the supposition that they are the only justification of cardinal concepts, and this assumption is merely a rather dubious conjecture. After all, philosophers of science have been debating about theoretical concepts for at least 40 years, and, though the last word has not yet been spoken, they generally agree that it is possible to have meaningful, yet observationally undefinable theoretical notions. And the concepts of subjective probability and utility are theoretical notions of decision theory. Thus if philosophers of science are right, they need not necessarily be proved observationally definable by representation theorems for being meaningful. For that reason I consider quantitative decision models fundamental for decision theory and measurement as part of the confirmation or testing theory of the quantitative models.
Of course, the latter is important for evaluating the former, but there may be different (e.g. conceptual) grounds for finding one quantitative decision model more satisfactory than another.
Article
Full-text available
A central part of Bayesianism is the doctrine that the decision maker's knowledge in a given situation can be represented by a subjective probability measure defined over the possible states of the world. This measure can be used to determine the expected utility for the agent of the various alternatives open to him. The basic decision rule is then that the alternative which has the maximal expected utility should be chosen. A fundamental assumption of this strict form of Bayesianism is that the decision maker's knowledge can be represented by a unique probability measure. The adherents of this assumption have produced a variety of arguments in favor of it, the most famous being the so-called Dutch book arguments. A consequence of the assumption, in connection with the rule of maximizing expected utility, is that in two decision situations which are identical with respect to the probabilities assigned to the relevant states and the utilities of the various outcomes, the decisions should be the same. It seems to us, however, that it is possible to find decision situations which are identical in all the respects relevant to the strict Bayesian, but which nevertheless motivate different decisions. As an example to illustrate this point, consider Miss Julie, who is invited to bet on the outcome of three different tennis matches. As regards match A, she is very well informed about the two players - she knows everything about the results of their earlier matches, she has watched them play several times, she is familiar with their present physical condition and the setting of the match, etc. Given all this information, Miss Julie predicts that it will be a very even match and that mere chance will determine the winner. In match B, she knows nothing whatsoever about the relative strength of the contestants (she has not even heard their names before) and she has no other information that
Article
Full-text available
Previous claims to have resolved the two-envelope paradox have been premature. The paradoxical argument has been exposed as manifestly fallacious if there is an upper limit to the amount of money that may be put in an envelope; but the paradoxical cases which can be described if this limitation is removed do not involve mathematical error, nor can they be explained away in terms of the strangeness of infinity. Only by taking account of the partial sums of the infinite series of expected gains can the paradox be resolved.
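The reasoning the paper addresses can be written out explicitly; the following is the standard rendering of the paradoxical step, given here only as an illustration:

```latex
% If my envelope holds $x$, the other is taken to hold $2x$ or $x/2$
% with equal probability, so swapping appears profitable for every $x$:
\[
  \mathbb{E}[\text{other} \mid \text{mine} = x]
    \;=\; \tfrac{1}{2}\,(2x) + \tfrac{1}{2}\cdot\tfrac{x}{2}
    \;=\; \tfrac{5x}{4} \;>\; x .
\]
% With no upper bound on the amounts, the total expected gain from swapping
% becomes an infinite series; as the paper argues, the paradox dissolves
% only by attending to the partial sums of that series rather than
% rearranging its terms.
```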
Article
A collection of 13 papers by David Lewis, written on a variety of topics including causation, counterfactuals and indicative conditionals, the direction of time, subjective and objective probability, explanation, perception, free will, and rational decision. The conclusions reached include the claim that time travel is possible, that counterfactual dependence is asymmetrical, that events are properties of spatiotemporal regions, that the Prisoners’ Dilemma is a Newcomb problem, and that causation can be analyzed in terms of counterfactual dependence between events. These papers can be seen as a “prolonged campaign” for a philosophical position Lewis calls “Humean supervenience,” according to which “all there is to the world is a vast mosaic of local matters of particular fact,” with all global features of the world thus supervening on the spatiotemporal arrangement of local qualities.
Article
This paper deals with the two-envelope paradox. Two main formulations of the paradoxical reasoning are distinguished, which differ according to the partition of possibilities employed. We argue that in the first formulation the conditionals required for the utility assignment are problematic; the error is identified as a fallacy of conditional reasoning. We go on to consider the second formulation, where the epistemic status of certain singular propositions becomes relevant; our diagnosis is that the states considered do not exhaust the possibilities. Thus, on our approach to the paradox, the fallacy, in each formulation, is found in the reasoning underlying the relevant utility matrix; in both cases, the paradoxical argument goes astray before one gets to questions of probability or calculations of expected utility.
Article
This paper presents an axiomatization of the principle of maximizing expected utility that does not rely on the independence axiom or sure-thing principle. Perhaps more importantly the new axiomatization is based on an ex ante approach, instead of the standard ex post approach. An ex post approach utilizes the decision maker's preferences among risky acts for generating a utility and a probability function, whereas in the ex ante approach a set of preferences among potential outcomes are on the input side of the theory and the decision maker's preferences among risky acts on the output side.
Article
The notion that probability theory is the theory of chance has an immediate appeal. We may allow that there are other kinds of things to which probability can address itself, things such as degrees of rational belief and degrees of confirmation, to name only two, but if chance forms part of the world, then probability theory ought, it would seem, to be the device to deal with it. Although chance is undeniably a mysterious thing, one promising way to approach it is through the use of propensities – indeterministic dispositions possessed by systems in a particular environment, exemplified perhaps by such quite different phenomena as a radioactive atom's propensity to decay and my neighbor's propensity to shout at his wife on hot summer days. There is no generally accepted account of propensities, but whatever they are, propensities must, it is commonly held, have the properties prescribed by probability theory. My contention is that they do not and that, rather than this being construed as a problem for propensities, it is to be taken as a reason for rejecting the current theory of probability as the correct theory of chance. The first section of the paper will provide an informal version of the argument, indicating how the causal nature of propensities cannot be adequately represented by standard probability theory. In the second section a full version of the argument will be given so that the assumptions underlying the informal account can be precisely identified. The third section examines those assumptions and deals with objections that could be raised against the argument and its conclusion. The fourth and final section draws out some rather more general consequences of accepting the main argument. Those who find the first section sufficiently persuasive by itself may wish to go immediately to the final section, returning thereafter to the second and third sections as necessary. © 1985 Paul Humphreys
Book
"This is the classic work upon which modern-day game theory is based. What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published Theory of Games and Economic Behavior. In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences. This sixtieth anniversary edition includes not only the original text but also an introduction by Harold Kuhn, an afterword by Ariel Rubinstein, and reviews and articles on the book that appeared at the time of its original publication in the New York Times, the American Economic Review, and a variety of other publications. Together, these writings provide readers a matchless opportunity to more fully appreciate a work whose influence will yet resound for generations to come."
Article
Allan Gibbard and William Harper (1978) point out that according to the definition of expected utility adopted by causal decision theory there are decisions that are unstable in the sense that the realisation of any option provides evidence that some other option is better. Gibbard and Harper contend that such cases, while bizarre, do not undermine causal decision theory. And Brian Skyrms (1982) agrees with their contention, although he supports it in a different way. On the other hand, Reed Richter (1983) maintains that cases of decision instability call for a revision of causal decision theory. I argue that cases of decision instability do not pose a problem for causal decision theory. However, I do not use the arguments advanced by Gibbard and Harper, and Skyrms, since, as I show, they do not provide an adequate defence of causal decision theory. Instead, I construct a new argument based on a temporal analysis of decisions. © 1985, Australasian Association of Philosophy. All rights reserved.
Article
You are given a choice between two envelopes. You are told, reliably, that each envelope has some money in it-some whole number of dollars, say-and that one envelope contains twice as much money as the other. You don't know which has the higher amount and which has the lower. You choose one, but are given the opportunity to switch to the other. Here is an argument that it is rationally preferable to switch: Let x be the quantity of money in your chosen envelope. Then the quantity in the other is either 1/2x or 2x, and these possibilities are equally likely. So the expected utility of switching is 1/2(1/2x) + 1/2(2x) = 1.25x, whereas that for sticking is only x. So it is rationally preferable to switch. There is clearly something wrong with this argument. For one thing, it is obvious that neither choice is rationally preferable to the other: it's a tossup. For another, if you switched on the basis of this reasoning, then the same argument could immediately be given for switching back; and so on, indefinitely. For another, there is a parallel argument for the rational preferability of sticking, in terms of the quantity y in the other envelope. But the problem is to provide an adequate account of how the argument goes wrong. This is the two-envelope paradox. Although there is fairly extensive recent literature on this problem, none of it seems to me to get to the real heart of the matter.' In my view, the flaw in the paradoxical argument is considerably harder to diagnose than is usually believed, and an adequate diagnosis reveals important morals about both the nature of probability and the foundations of decision theory. I will offer my own account, in such a way that the morals of the paradox will unfold first and then will generate the diagnosis of how it goes wrong. Thereafter I will briefly pursue some theoretical issues for decision theory that arise in light of the paradox's lessons.
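The "it's a tossup" verdict can be checked by simulation. The sketch below is not from the paper; it assumes, purely for illustration, that the lower amount is drawn uniformly from 1 to 100 dollars. Under that definite prior, sticking and switching come out equal in expectation, unlike the naive 1.25x computation.

```python
import random

# Two-envelope simulation under an assumed (hypothetical) prior:
# the lower amount is uniform on 1..100 dollars, the other holds double.
random.seed(0)

def trial():
    low = random.randint(1, 100)
    pair = (low, 2 * low)
    chosen, other = random.sample(pair, 2)  # pick an envelope at random
    return chosen, other

n = 100_000
stick = switch = 0
for _ in range(n):
    chosen, other = trial()
    stick += chosen    # value of keeping the chosen envelope
    switch += other    # value of switching to the other one

# The naive argument predicts switching is worth 1.25 times sticking;
# the simulation gives two nearly identical averages (about 75.75 here).
print(stick / n, switch / n)
```

The simulation does not by itself diagnose the flaw the abstract refers to, but it confirms the symmetry that any adequate diagnosis must explain.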
Article
An agent who violates independence can avoid dynamic inconsistency in sequential choice if he is sophisticated enough to make use of backward induction in planning. However, Seidenfeld has demonstrated that such a sophisticated agent with “dependent” preferences is bound to violate the principle of dynamic substitution, according to which admissibility of a plan is preserved under substitution of indifferent options at various choice nodes in the decision tree. Since Seidenfeld considers dynamic substitution to be a coherence condition on dynamic choice, he concludes that sophistication cannot save a violator of independence from incoherence. In response to McClennen’s objection that relying on dynamic substitution when independence is at stake must be question-begging, Seidenfeld undertakes to prove that dynamic substitution follows from the principle of backward induction alone, provided we assume that the agent’s admissible choices from different sets of feasible plans are all based on a fixed underlying preference ordering of plans. This paper shows that Seidenfeld’s proof fails: depending on the interpretation, it is either invalid or based on an unacceptable assumption.
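The backward-induction planning that a sophisticated agent uses can be sketched on a toy decision tree. The tree shape and payoffs below are hypothetical, chosen only to show the procedure of solving a sequential problem from the leaves upward.

```python
# A minimal backward-induction sketch. A tree is either a leaf utility
# (a number) or a list of subtrees, one per option at that choice node.
# Payoffs are hypothetical, for illustration only.

def backward_induction(tree):
    """Return (value, plan), where plan lists the option index chosen
    at each successive choice node along the admissible path."""
    if not isinstance(tree, list):       # leaf: its utility is its value
        return tree, []
    results = [backward_induction(child) for child in tree]
    best = max(range(len(results)), key=lambda i: results[i][0])
    value, subplan = results[best]
    return value, [best] + subplan

# Two-stage example: choose an option now, then again at the next node.
tree = [[3, 1], [2, 5]]
print(backward_induction(tree))  # prints: (5, [1, 1])
```

Seidenfeld's argument concerns what happens when the admissible choices at such nodes are required to derive from one fixed preference ordering of whole plans; the sketch simply fixes that ordering by the leaf utilities.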
Article
This paper reviews recently developed theories that generalize the von Neumann-Morgenstern theory of preference under risk and Savage’s theory of preference under uncertainty. The new theories are designed to accommodate systematic and predictable violations of previous theories while not giving up too much of the mathematical elegance of their expected utility representations. The material in the paper is adapted from the author’s book “Nonlinear preference and utility theory” (1988; Zbl 0715.90001).
Article
The first part of this paper reexamines the logical foundations of Bayesian decision theory and argues that the Bayesian criterion of expected-utility maximization is the only decision criterion consistent with rationality. On the other hand, the Bayesian criterion, together with the Pareto optimality requirement, inescapably entails a utilitarian theory of morality. The next sections discuss the role both of cardinal utility and of cardinal interpersonal comparisons of utility in ethics. It is shown that the utilitarian welfare function satisfies all of Arrow's social choice postulates avoiding the celebrated impossibility theorem by making use of information which is unavailable in Arrow's original framework. Finally, rule utilitarianism is contrasted with act utilitarianism and judged to be preferable for the purposes of ethical theory.
Article
A version of Harsanyi's social aggregation theorem is established for state-contingent alternatives when the number of states is finite. The consequences of using utility functions that do not have an expected utility functional form to represent the individual and social preferences are also considered.
Article
It is commonly assumed that moral deliberation requires that the alternatives available in a choice situation are evaluatively comparable. This comparability assumption is threatened by claims of incomparability, which is often established by means of the small improvement argument (SIA). In this paper I argue that SIA does not establish incomparability in a stricter sense. The reason is that it fails to distinguish incomparability from a kind of evaluative indeterminacy which may arise due to the vagueness of the evaluative comparatives ‘better than,’ ‘worse than,’ and ‘equally as good as.’
Article
The paper considers the normative status of the independence and ordering principles of expected utility theory. Preferences are defined in terms of choice and the two principles derived from restrictions on choice in sequential decision problems. The results extend and clarify important contributions by Hammond and McClennen. They show that it is different requirements on dynamic choice which rationalise independence and ordering respectively and illuminate their relationship to consequentialism.
Article
This paper provides new foundations for Bayesian Decision Theory based on a representation theorem for preferences defined on a set of prospects containing both factual and conditional possibilities. This use of a rich set of prospects not only provides a framework within which the main theoretical claims of Savage, Ramsey, Jeffrey and others can be stated and compared, but also allows for the postulation of an extended Bayesian model of rational belief and desire from which they can be derived as special cases. The main theorem of the paper establishes the existence of such a Bayesian representation of preferences over conditional prospects, i.e. the existence of a pair of real-valued functions that respectively measure the agent's degrees of belief and desire and that satisfy the postulated rationality conditions on partial belief and desire. The representation of partial belief is shown to be unique, and that of partial desire unique up to a linear transformation.
Article
Although transitivity is often regarded as an indispensable principle of rational choice under uncertainty, some decision models allow nontransitive preferences. One of these, regret theory, is consistent with a particular pattern of choice cycles when payoffs are nonnegative and the opposite pattern of cycles when payoffs are nonpositive. This paper presents evidence from an experiment designed to test these implications of regret theory. Copyright 1992 by Royal Economic Society.