Article

Statistical Decision Functions.

... An elegant duality result by Wald [1949] and Pearce [1984] characterizes what it means for an action to be rationalized in the static model. The result states that, in a two-player game, ...
... The most closely related papers to ours are the following: At the conceptual core, as pointed out in Section 3, the idea of using duality to characterize the empirical content of the dynamic decision problem builds on the static counterpart pioneered by Wald [1949] and Pearce [1984]. Further, in a static setting, Caplin and Martin [2015] provides a necessary condition for stochastic choices to be rationalized by information in a Bayesian model. ...
... Note that if T = 1, the actions which can be rationalized are precisely those that are a best-response to some belief over states. The theorem then reduces to the celebrated Wald-Pearce Lemma (Wald [1949] and Pearce [1984]), which states that the actions that are never a best-response, and hence cannot be rationalized, are strictly dominated by some mixed strategy. Here, our rule would recommend deviating from the dominated action to the dominating mixed strategy and not deviating from the other actions. ...
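The Wald-Pearce Lemma quoted above has a simple computational face: whether a given action is strictly dominated by a mixed strategy (and hence is never a best response to any belief over states) can be checked with a small linear program. The sketch below is purely illustrative and not taken from the preprint; the function name and the example payoff matrix are invented.

```python
# Illustrative check of the Wald-Pearce characterization: an action is never a
# best response to any belief iff it is strictly dominated by some mixed strategy.
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, a0):
    """U[a, s] = payoff of action a in state s; test whether action a0 is
    strictly dominated by a mixture of the actions."""
    n_actions, n_states = U.shape
    # Variables: mixture weights sigma (n_actions) and a slack eps.
    # Maximize eps subject to  sum_a sigma[a] * U[a, s] >= U[a0, s] + eps  for all s.
    c = np.zeros(n_actions + 1)
    c[-1] = -1.0                                        # linprog minimizes, so minimize -eps
    A_ub = np.hstack([-U.T, np.ones((n_states, 1))])    # -sigma'U[:, s] + eps <= -U[a0, s]
    b_ub = -U[a0, :]
    A_eq = np.ones((1, n_actions + 1)); A_eq[0, -1] = 0.0   # mixture weights sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * n_actions + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.success and -res.fun > 1e-9              # eps > 0  <=>  strict dominance

# Example: the middle action is a best response to no belief over the two states.
U = np.array([[3.0, 0.0],
              [1.0, 1.0],     # dominated by a 0.5/0.5 mix of the other two actions
              [0.0, 3.0]])
print([strictly_dominated(U, a) for a in range(3)])     # [False, True, False]
```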
Preprint
Full-text available
An analyst observes an agent take a sequence of actions. The analyst does not have access to the agent's information and ponders whether the observed actions could be justified through a rational Bayesian model with a known utility function. We show that the observed actions cannot be justified if and only if there is a single deviation argument that leaves the agent better off, regardless of the information. The result is then extended to allow for distributions over possible action sequences. Four applications are presented: monotonicity of rationalization with risk aversion, a potential rejection of the Bayesian model with observable data, feasible outcomes in dynamic information design, and partial identification of preferences without assumptions on information.
... In this section, we will make use of game theory (presented in [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31]) to develop the game theoretical model and Markov chains (presented in [32][33][34][35][36][37][38][39][40][41][42][43]) to estimate the game's probabilities in order to design suitable models for financial data (specifically, for the data that were described in the previous section); then, we will describe how we applied our models using R 4.0.4 software. ...
... To create the game that mimics the financial markets, we need to meet game theory's requirement of having at least two players whose identities are known; in our case, the players are the speculator and the market. However, the market is an abstract entity; thus, we enter the subclass of games (developed in [19,20,27,30]) called games against nature, where one of the players is an abstract entity. ...
... Note that, with the previous assumptions, we have a game against nature where we assume Wald's (max-min) Criterion (for more details check [19,20,27] and/or [30]). That is, we apply Wald's (max-min) Criterion to model the interaction between the speculator and the market within a game against nature framework, where we assume the market acts adversarially, aiming to minimize the speculator's gains while the speculator aims to maximize their profit under the worst-case scenario of market behavior. ...
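As a purely illustrative sketch of the criterion described in the excerpt above (the payoff numbers are invented, not the paper's data), Wald's max-min rule picks the strategy whose worst-case payoff across the market's patterns is largest:

```python
# Wald's max-min criterion in a game against nature: rows are the speculator's
# strategies, columns are market "patterns"; pick the row with the best worst case.
import numpy as np

payoff = np.array([          # payoff[strategy, pattern], hypothetical numbers
    [ 2.0, -1.0,  0.5],
    [ 1.0,  0.5,  0.0],
    [ 3.0, -2.5, -0.5],
])

worst_case = payoff.min(axis=1)               # worst payoff of each strategy
maximin_strategy = int(worst_case.argmax())
print(maximin_strategy, worst_case[maximin_strategy])   # strategy 1, value 0.0
```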
Article
Full-text available
We model the financial markets as a game and make predictions using Markov chain estimators. We extract the possible patterns displayed by the financial markets, define a game where one of the players is the speculator, whose strategies depend on his/her risk-to-reward preferences, and the market is the other player, whose strategies are the previously observed patterns. Then, we estimate the market’s mixed probabilities by defining Markov chains and utilizing its transition matrices. Afterwards, we use these probabilities to determine which is the optimal strategy for the speculator. Finally, we apply these models to real-time market data to determine its feasibility. From this, we obtained a model for the financial markets that has a good performance in terms of accuracy and profitability.
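A hedged sketch of the Markov-chain step the abstract describes: estimate a transition matrix from an observed sequence of market patterns by counting transitions and normalizing rows, then read off the next-pattern probabilities. The pattern labels and the observed sequence are invented for illustration; the paper's own construction of patterns may differ.

```python
# Estimate a Markov transition matrix from an observed pattern sequence.
import numpy as np

patterns = ["up", "down", "flat"]
index = {p: i for i, p in enumerate(patterns)}
observed = ["up", "up", "down", "flat", "down", "up", "down", "down", "flat", "up"]

counts = np.zeros((len(patterns), len(patterns)))
for prev, nxt in zip(observed[:-1], observed[1:]):
    counts[index[prev], index[nxt]] += 1          # count prev -> nxt transitions

row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums,
              out=np.full_like(counts, 1.0 / len(patterns)),
              where=row_sums > 0)                 # uniform row if a pattern was never left
print(P[index["up"]])    # estimated probabilities of the next pattern after "up"
```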
... The basic structure of our framework dates back to Abraham Wald's maximin model [19] and Leonard Savage's minimax regret model [17], both of which are based on worst-case principles. Wald's maximin criterion prescribes choosing the strategy that maximizes the minimum payoff, while Savage's criterion prescribes choosing the strategy that minimizes the maximum regret. ...
... As mentioned above, the motivation to find an upper bound on the regret comes from Savage's 1951 work [17] in decision theory, which develops the minimax regret criterion designed to minimize the worst-case regret. Similar to Wald's maximin model [19], this has often been utilized to model choices under uncertainty; however, it assumes knowledge of the underlying distribution. Ismail [9] proposes the optimin criterion, which coincides with Wald's maximin criterion in zero-sum games. ...
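To make the contrast drawn in the excerpt concrete, the small sketch below evaluates both criteria on the same hypothetical payoff matrix (the numbers are invented): Wald's maximin ranks acts by their worst payoff, while Savage's rule ranks them by their worst regret, and the two can select different acts.

```python
# Wald's maximin versus Savage's minimax regret on payoff[action, state].
import numpy as np

payoff = np.array([          # hypothetical payoffs
    [0.0, 100.0],
    [1.0,   1.0],
])

maximin_action = int(payoff.min(axis=1).argmax())          # best worst-case payoff

regret = payoff.max(axis=0) - payoff                       # regret[a, s] = best in state s minus payoff
minimax_regret_action = int(regret.max(axis=1).argmin())   # smallest worst-case regret

print(maximin_action, minimax_regret_action)   # 1 and 0: the two criteria disagree here
```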
Preprint
Full-text available
In this paper, we propose a probabilistic game-theoretic model to study the properties of the worst-case regret of the greedy strategy under complete (Knightian) uncertainty. In a game between a decision-maker (DM) and an adversarial agent (Nature), the DM observes a realization of product ratings for each product. Upon observation, the DM chooses a strategy, which is a function from the set of observations to the set of products. We study the theoretical properties, including the worst-case regret of the greedy strategy that chooses the product with the highest observed average rating. We prove that, with respect to the worst-case regret, the greedy strategy is optimal and that, in the limit, the regret of the greedy strategy converges to zero. We validate the model on data collected from Google reviews for restaurants, showing that the greedy strategy not only performs according to the theoretical findings but also outperforms the uniform strategy and the Thompson Sampling algorithm.
... We propose a decision rule in the style of Wald (1949) that estimates the joint distribution of potential outcomes in the sample as the maximizer of this likelihood. There are a number of benefits to the statistical decision theory framework in our setting. ...
... In the previous section, we presented a design-based model of a random experiment that preserves curvature in the likelihood with respect to the joint distribution of potential outcomes, even when holding constant the marginal distributions. We turn now to the broad setting of statistical decision theory in the style of Wald (1949) to determine the best ways to exploit this novel information. Suppose a decision maker wishes to guess the joint distribution of potential outcomes in the sample. ...
Preprint
Full-text available
We present a design-based model of a randomized experiment in which the observed outcomes are informative about the joint distribution of potential outcomes within the experimental sample. We derive a likelihood function that maintains curvature with respect to the joint distribution of potential outcomes, even when holding the marginal distributions of potential outcomes constant -- curvature that is not maintained in a sampling-based likelihood that imposes a large sample assumption. Our proposed decision rule guesses the joint distribution of potential outcomes in the sample as the distribution that maximizes the likelihood. We show that this decision rule is Bayes optimal under a uniform prior. Our optimal decision rule differs from and significantly outperforms a "monotonicity" decision rule that assumes no defiers or no compliers. In sample sizes ranging from 2 to 40, we show that the Bayes expected utility of the optimal rule increases relative to the monotonicity rule as the sample size increases. In two experiments in health care, we show that the joint distribution of potential outcomes that maximizes the likelihood need not include compliers even when the average outcome in the intervention group exceeds the average outcome in the control group, and that the maximizer of the likelihood may include both compliers and defiers, even when the average intervention effect is large and statistically significant.
... The optimal solution of that problem of testing the simple hypothesis H0 against the simple alternative H1 (Neyman-Pearson criterion) [1,2] has the form y ∈ A(A, σ) ⇒ H0, y ∉ A(A, σ) ⇒ H1, ...
... Without loss of generality we assume the set E to be closed and Lebesgue measurable on R^n. Formally speaking, the optimal solution of the problem (10) of minimax testing of hypotheses H0 and H1 is described in Wald's general theory of statistical decisions [1]. For that solution we need to find the "least favorable" prior distribution π_lf(dE) on E, replace the composite hypothesis H1 by the simple hypothesis H1(π_lf), and then investigate the characteristics of the corresponding Neyman-Pearson criterion for testing the simple hypotheses H0 and H1(π_lf). ...
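Stated compactly (a hedged restatement in standard notation, not a quotation from reference [1]): the least favorable prior π_lf turns the composite alternative into a simple mixture, and the minimax test is the Neyman-Pearson likelihood-ratio test against that mixture,

\[
p_{\pi_{\mathrm{lf}}}(y) = \int_{\mathcal{E}} p_{\sigma}(y)\, \pi_{\mathrm{lf}}(d\sigma),
\qquad
\varphi(y) = \mathbf{1}\left\{ \frac{p_{\pi_{\mathrm{lf}}}(y)}{p_{0}(y)} \ge c \right\},
\]

where p_0 is the density under H0 and the threshold c is set by the prescribed error level.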
Preprint
The problem of minimax detection of a Gaussian random signal vector in white Gaussian additive noise is considered. It is supposed that an unknown vector σ of the signal vector intensities belongs to the given set E. It is investigated when it is possible to replace the set E by a smaller set E₀ without loss of quality (and, in particular, to replace it by a single point σ₀).
... With complex origins, NHST was first introduced as a significance test by William Sealy Gosset [7] and Ronald Fisher [8,9]. This is followed by Jerzy Neyman [10], Egon Pearson, and Abraham Wald [11], who introduced tests of acceptance, incorporating concepts of alpha and beta error and decision functions. The NHST paradigm currently represents a subsequent unstandardized combination of these approaches [12][13][14][15][16]. Hence, whilst NHST is commonly encountered in clinical research, particularly in trials comparing interventions between two patient populations, it is not always considered part of method comparison studies in clinical laboratories. ...
Article
Full-text available
Amongst the main perspectives when evaluating the results of medical studies are statistical significance (following formal statistical testing) and clinical significance. While statistical significance shows that a factor’s observed effect on the study results is unlikely (for a given alpha) to be due to chance, effect size shows that the factor’s effect is substantial enough to be clinically useful. The essence of statistical significance is “negative” - that the effect of a factor under study probably did not happen by chance. In contrast, effect size and clinical significance evaluate whether a clinically “positive” effect of a factor is effective and cost-effective. Medical diagnoses and treatments should never be based on the results of a single study. Results from numerous well-designed studies performed in different circumstances are needed, focusing on the magnitude of the effects observed and their relevance to the medical matters being studied rather than on the p-values. This paper discusses statistical inference and its relevance to clinical importance of quantitative testing in clinical laboratories. To achieve this, we first pose questions focusing on fundamental statistical concepts and their relationship to clinical significance. The paper also aims to provide examples of using the methodological approaches of superiority, equivalence, non-inferiority, and inferiority studies in clinical laboratories, which can be used in evidence-based decision-making processes for laboratory professionals.
... Robust MDPs are a framework for the formal analysis of Markov decision processes in which the decision maker (DM) is unsure of the transition function, see [19,9,25]. In line with a tradition that dates back to Wald [23], this uncertainty is modeled by assuming that 'nature' reacts adversarially to the DM's strategy by choosing a transition function that minimizes the DM's payoff. As such, it is clear that robust MDPs and zero-sum stochastic games are strongly related, even if there is no one-to-one mapping between the questions addressed in the two communities. ...
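A compact way to state the adversarial-nature modelling mentioned in the excerpt is the robust Bellman equation (standard background notation, not taken from the preprint), with U(s,a) the ambiguity set of transition kernels, r the reward, and γ the discount factor:

\[
V(s) = \max_{a \in A}\; \min_{P \in \mathcal{U}(s,a)} \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V(s') \Big].
\]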
Preprint
Full-text available
This paper investigates properties of Blackwell ε-optimal strategies in zero-sum stochastic games when the adversary is restricted to stationary strategies, motivated by applications to robust Markov decision processes. For a class of absorbing games, we show that Markovian Blackwell ε-optimal strategies may fail to exist, yet we prove the existence of Blackwell ε-optimal strategies that can be implemented by a two-state automaton whose internal transitions are independent of actions. For more general absorbing games, however, there need not exist Blackwell ε-optimal strategies that are independent of the adversary's decisions. Our findings point to a contrast between absorbing games and generalized Big Match games, and provide new insights into the properties of optimal policies for robust Markov decision processes.
... Statistical decision theory [Wald, 1949, Savage, 1972] provides a framework for evaluating decisions that accounts for both of these considerations. Below we define a decision problem and show how a definition of normative decision-making can be used to assess the value of predictive uncertainty information. ...
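As background for the framework being cited (standard textbook notation, not text from the preprint): a statistical decision problem specifies a loss L(θ, a) and decision rules δ mapping data X to actions, which are then compared through their risk, their Bayes risk under a prior π, or their worst-case risk in Wald's minimax sense:

\[
R(\theta, \delta) = \mathbb{E}_{\theta}\big[ L(\theta, \delta(X)) \big], \qquad
r(\pi, \delta) = \int R(\theta, \delta)\, \pi(d\theta), \qquad
\delta^{*} \in \arg\min_{\delta}\, \sup_{\theta}\, R(\theta, \delta).
\]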
Preprint
Full-text available
Methods to quantify uncertainty in predictions from arbitrary models are in demand in high-stakes domains like medicine and finance. Conformal prediction has emerged as a popular method for producing a set of predictions with specified average coverage, in place of a single prediction and confidence value. However, the value of conformal prediction sets to assist human decisions remains elusive due to the murky relationship between coverage guarantees and decision makers' goals and strategies. How should we think about conformal prediction sets as a form of decision support? Under what conditions do we expect the support they provide to be superior versus inferior to that of alternative presentations of predictive uncertainty? We outline a decision theoretic framework for evaluating predictive uncertainty as informative signals, then contrast what can be said within this framework about idealized use of calibrated probabilities versus conformal prediction sets. Informed by prior empirical results and theories of human decisions under uncertainty, we formalize a set of possible strategies by which a decision maker might use a prediction set. We identify ways in which conformal prediction sets and posthoc predictive uncertainty quantification more broadly are in tension with common goals and needs in human-AI decision making. We give recommendations for future research in predictive uncertainty quantification to support human decision makers.
... A growing literature therefore focuses on finding minimax regret rules, see Wald (1950), Savage (1954), and Manski (2004), that is, rules that minimize over all treatment policies the maximal regret over all DGPs, where regret for a given policy and DGP measures the gap between the best possible expected outcome and the expected outcome obtained for the chosen policy. Unfortunately, there are very few examples where minimax regret rules are analytically known and therefore in many examples of empirical interest they cannot currently be used by policymakers. ...
Preprint
Finding numerical approximations to minimax regret treatment rules is of key interest. To do so when potential outcomes are in {0,1} we discretize the action space of nature and apply a variant of Robinson's (1951) algorithm for iterative solutions for finite two-person zero sum games. Our approach avoids the need to evaluate regret of each treatment rule in each iteration. When potential outcomes are in [0,1] we apply the so-called coarsening approach. We consider a policymaker choosing between two treatments after observing data with unequal sample sizes per treatment and the case of testing several innovations against the status quo.
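The abstract builds on Robinson's (1951) iterative method for finite two-person zero-sum games, better known as fictitious play. The generic sketch below shows that iteration only; the payoff matrix and iteration count are arbitrary, and the preprint's own variant (which avoids re-evaluating the regret of each treatment rule in each iteration) is not reproduced here.

```python
# Fictitious play for a zero-sum matrix game: each player best-responds to the
# opponent's empirical mixture of past plays; the value bounds converge to the
# game value (Robinson, 1951).
import numpy as np

def fictitious_play(A, iterations=5000):
    """A[i, j] = payoff to the row player; returns value bounds and empirical mixtures."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    i = 0                                       # arbitrary initial row action
    for _ in range(iterations):
        row_counts[i] += 1
        j = int((row_counts @ A).argmin())      # column best response to row mixture
        col_counts[j] += 1
        i = int((A @ col_counts).argmax())      # row best response to column mixture
    x = row_counts / row_counts.sum()
    y = col_counts / col_counts.sum()
    lower = float((x @ A).min())                # row player guarantees at least this with x
    upper = float((A @ y).max())                # column player holds row player to this with y
    return lower, upper, x, y

# Matching pennies: game value 0, optimal mixtures (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
lower, upper, x, y = fictitious_play(A)
print(round(lower, 3), round(upper, 3), x.round(3), y.round(3))
```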
... It's easy to notice in the preceding analysis that there is no restriction on P_t(A⋆ = a) to be derived using Bayes-rule-based posterior distributions of arm rewards, P_t(R_{t,a}), as is done in parametric Thompson sampling. This choice is rather implicit, given the decision-theoretic and information-theoretic coherency of the Bayesian framework (Wald, 1961; Zellner, 1988). However, the Bayesian framework is not limited to Bayes-rule-based derivation of posterior distributions. ...
Preprint
Full-text available
We introduce Dirichlet Process Posterior Sampling (DPPS), a Bayesian non-parametric algorithm for multi-arm bandits based on Dirichlet Process (DP) priors. Like Thompson-sampling, DPPS is a probability-matching algorithm, i.e., it plays an arm based on its posterior-probability of being optimal. Instead of assuming a parametric class for the reward generating distribution of each arm, and then putting a prior on the parameters, in DPPS the reward generating distribution is directly modeled using DP priors. DPPS provides a principled approach to incorporate prior belief about the bandit environment, and in the noninformative limit of the DP posteriors (i.e. Bayesian Bootstrap), we recover Non Parametric Thompson Sampling (NPTS), a popular non-parametric bandit algorithm, as a special case of DPPS. We employ stick-breaking representation of the DP priors, and show excellent empirical performance of DPPS in challenging synthetic and real world bandit environments. Finally, using an information-theoretic analysis, we show non-asymptotic optimality of DPPS in the Bayesian regret setup.
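A hedged sketch of the non-parametric Thompson sampling mechanism that the abstract recovers as the noninformative limit of DPPS: each arm's observed rewards, augmented here with a single optimistic pseudo-reward of 1 as in common NPTS presentations, are re-weighted with Dirichlet weights (a Bayesian bootstrap), and the arm with the highest re-weighted mean is played. The environment and all constants are invented for the demo; this is not the DPPS algorithm itself.

```python
# Bayesian-bootstrap-flavoured bandit sketch: rewards assumed bounded in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

def pull(arm):                                  # hypothetical Bernoulli environment
    means = [0.3, 0.5, 0.7]
    return float(rng.random() < means[arm])

n_arms, horizon = 3, 2000
rewards = [[1.0] for _ in range(n_arms)]        # one optimistic pseudo-reward per arm

for _ in range(horizon):
    scores = []
    for arm in range(n_arms):
        obs = np.array(rewards[arm])
        w = rng.dirichlet(np.ones(len(obs)))    # Bayesian-bootstrap weights
        scores.append(float(w @ obs))           # re-weighted mean reward
    arm = int(np.argmax(scores))                # probability matching via sampling
    rewards[arm].append(pull(arm))

print([len(r) - 1 for r in rewards])            # pull counts; the best arm should dominate
```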
... I depart from this literature by carefully thinking about how to set up a tractable decision problem under statistical uncertainty that can be solved with sample estimates. This paper also builds on a broad literature in statistical decision theory, which dates back to Wald (1949) and, more recently, to the seminal paper of Manski (2004). A relevant strand of literature is the literature on Empirical Welfare Maximization (EWM) (Kitagawa and Tetenov, 2018; Athey and Wager, 2021; Mbakop and Tabord-Meehan, 2021; Sun, 2024), which considers how to use sample data to optimally choose an eligibility criterion for a given policy. ...
Preprint
Full-text available
Policymakers often make changes to policies whose benefits and costs are unknown and must be inferred from statistical estimates in empirical studies. The sample estimates are noisier for some policies than for others, which should be adjusted for when comparing policy changes in decision-making. In this paper I consider the problem of a planner who makes changes to upfront spending on a set of policies to maximize social welfare but faces statistical uncertainty about the impact of those changes. I set up an optimization problem that is tractable under statistical uncertainty and solve for the Bayes risk-minimizing decision rule. I propose an empirical Bayes approach to approximating the optimal decision rule when the planner does not know a prior. I show theoretically that the empirical Bayes decision rule can approximate the optimal decision rule well, including in cases where a sample plug-in rule does not.
... The construction of multiplayer games with uncertainty can be viewed as the generalization of the theoretical framework explored in [13], which framed a single Agent's decision-making in the face of uncertainty as a game with a fictional player. This game-theoretical framework of "statistical decision theory" [28,27,4] defines equilibrium strategies - in imaginary games played with adversarial fictional players controlling the unknown parameters - and interprets these strategies as decision-making heuristics for the Agent. ...
Preprint
Full-text available
This paper introduces a framework for finite non-cooperative games where each player faces a globally uncertain parameter with no common prior. Every player both chooses a mixed strategy and projects an emergent subjective prior onto the uncertain parameters. We define an "Extended Equilibrium" by requiring that no player can improve her expected utility via a unilateral change of strategy, and the emergent subjective priors are such that they maximize the expected regret of the players. A fixed-point argument -- based on Brouwer's fixed point theorem and mimicking the construction of Nash -- ensures existence. Additionally, the "No Fictional Faith" theorem shows that any subjective equilibrium prior must stay non-concentrated if the parameter truly matters to a player. This approach provides a framework that unifies regret-based statistical decision theory and game theory, yielding a tool for handling strategic decision-making in the presence of deeply uncertain parameters.
... This is illustrated numerically in two examples given by a robust newsvendor problem and a robust portfolio choice problem. can be viewed as generalized decision-theoretic foundations of the classical decision rule of Wald (1950); see also Huber (1981). We contribute to this literature by developing corresponding optimization techniques. ...
Preprint
Full-text available
This paper studies distributionally robust optimization for a large class of risk measures with ambiguity sets defined by φ-divergences. The risk measures are allowed to be non-linear in probabilities, are represented by a Choquet integral possibly induced by a probability weighting function, and include many well-known examples (for example, CVaR, Mean-Median Deviation, Gini-type). Optimization for this class of robust risk measures is challenging due to their rank-dependent nature. We show that for many types of probability weighting functions including concave, convex and inverse S-shaped, the robust optimization problem can be reformulated into a rank-independent problem. In the case of a concave probability weighting function, the problem can be further reformulated into a convex optimization problem with finitely many constraints that admits explicit conic representability for a collection of canonical examples. While the number of constraints in general scales exponentially with the dimension of the state space, we circumvent this dimensionality curse and provide two types of upper and lower bounds algorithms. They yield tight upper and lower bounds on the exact optimal value and are formally shown to converge asymptotically. This is illustrated numerically in two examples given by a robust newsvendor problem and a robust portfolio choice problem.
... Maximin is a conservative criterion because it selects the best possible outcome from the worst outcomes yielded by each available act. This criterion was originally suggested by Wald (1950). ...
Article
Full-text available
In this paper, I examine Pascal’s Wager as a decision problem where the uncertainty is massive, that is, as a decision under ignorance. I first present several reasons to support this interpretation. Then, I argue that wagering for God is the optimal act in a broad range of cases, according to two well-known criteria for decision-making: the Minimax Regret rule and the Hurwicz criterion. Given a Pascalian standard matrix, I also show that a tie between wagering for God and wagering against God is only possible under very narrow conditions when applying the Hurwicz criterion. Finally, I discuss three objections to these two versions of the Wager. The most pressing challenge comes from the Many-Gods objection. I conclude that this objection is even more challenging whenever one faces a situation of massive uncertainty about God’s existence. I argue that addressing it requires a more detailed examination of the benefits of adopting a religious life compared to those of choosing non-theism, or an additional criterion that can break the tie between these options.
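For readers unfamiliar with the second criterion used in the paper, the Hurwicz rule scores each act by an α-weighted mix of its best and worst outcomes, so α = 0 recovers Wald's Maximin and α = 1 the Maximax rule. The matrix below is a generic placeholder, not the paper's Pascalian matrix.

```python
# Hurwicz optimism-pessimism criterion on a decision matrix payoff[act, state].
import numpy as np

def hurwicz(payoff, alpha):
    score = alpha * payoff.max(axis=1) + (1 - alpha) * payoff.min(axis=1)
    return int(score.argmax()), score

payoff = np.array([
    [100.0, -10.0],
    [ 20.0,   5.0],
])
for alpha in (0.0, 0.5, 0.9):
    best, score = hurwicz(payoff, alpha)
    print(alpha, best, score)
# alpha = 0 (pure pessimism) picks act 1; alpha = 0.9 (mostly optimism) picks act 0.
```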
... The Breusch-Pagan LM test, the Pesaran LM test, the bias-corrected LM test, and the Pesaran CD test were used to determine whether there is a cross-sectional dependence problem between the variables. The modified Wald test was used to determine whether there was a groupwise heteroskedasticity problem [38]. The Wooldridge test was used to determine whether there was an autocorrelation problem [39]. ...
Article
Full-text available
This study examines the impact of financial development and quality growth on environmental sustainability in European Union (EU) countries, making a significant contribution to the existing literature by introducing a composite index for environmental sustainability and emphasizing quality growth as a more inclusive alternative to traditional economic growth indicators. Unlike conventional studies, which often measure environmental sustainability using single indicators, this research introduces a composite index that includes both environmental damage (e.g., carbon emissions) and protective factors (e.g., forest area, renewable energy consumption). This innovative approach provides a more holistic assessment of environmental sustainability, distinguishing this study from existing research. The results emphasize the role of a robust financial system in promoting environmental sustainability, as each unit increase in financial development is positively correlated with the environmental sustainability ratio, encouraging investments and projects that prioritize environmental goals. In addition, the study shows that quality growth, which takes into account social welfare and resource efficiency in addition to economic expansion, is crucial for promoting sustainability. By focusing on quality growth, this study shifts the paradigm from mere quantitative economic expansion to a more comprehensive understanding of growth that integrates social and environmental dimensions. This nuanced approach contrasts with traditional models that focus on quantitative economic growth, highlighting that both quality growth and financial development are critical to supporting long-term environmental goals. This research provides actionable insights for policymakers by emphasizing the need for financial reforms, such as green bond markets and sustainable credit mechanisms, to support sustainable development.
... On the Shoulders of Giants Decision Theory, as a discipline originally rather belonging to theoretical economics and philosophy, was discovered early on as a perfectly natural language for addressing problems of mathematical statistics. Among the most prominent examples are Wald [1949], who reinterprets statistical inference procedures as two-player zero-sum games against nature and solves them by applying classic decision criteria such as minimax, Savage [1951] who gives an "informal exposition of it" and adds some "critical and philosophical remarks", or Hodges and Lehmann [1952], who propose to evaluate statistical decision functions by mixing their Bayes-risk and their minimax-score. Meanwhile, a decision-theoretic embedding for efficient and elegant solution of statistical problems has become absolutely standard (cf., e.g., Witting [1985], Berger [1985], Berger et al. [2000], French and Insua [2000], Liese and Miescke [2008] for standard textbooks) and has made it into the standard canon of courses at many statistics faculties. ...
Preprint
Full-text available
This habilitation thesis is cumulative and, therefore, is collecting and connecting research that I (together with several co-authors) have conducted over the last few years. Thus, the absolute core of the work is formed by the ten publications listed on page 5 under the name Contributions 1 to 10. The references to the complete versions of these articles are also found in this list, making them as easily accessible as possible for readers wishing to dive deep into the different research projects. The chapters following this thesis, namely Parts A to C and the concluding remarks, serve to place the articles in a larger scientific context, to (briefly) explain their respective content on a less formal level, and to highlight some interesting perspectives for future research in their respective contexts. Naturally, therefore, the following presentation has neither the level of detail nor the formal rigor that can (hopefully) be found in the papers. The purpose of the following text is to provide the reader an easy and high-level access to this interesting and important research field as a whole, thereby, advertising it to a broader audience.
... The edifice of mathematical-statistical methods developed almost undisturbed until the middle of the 20th century, when in 1950 the work "Statistical Decision Functions" by Abraham Wald (1902-1950) appeared (Wald 1949). It marks a certain culmination of the mathematization of statistical inference, as the two statisticians and Stanford professors Bradley Efron (*1938) and Trevor Hastie (*1953) write in the epilogue of their book "Computer Age Statistical Inference", published eight years ago (Efron and Hastie 2016). ...
... Neyman equated his concept of 'inductive behaviour' with the decision-theoretic idea of 'statistical decision-making' introduced by Wald (1950). In this view, one evaluates the expected loss associated with possible decisions over the probability distribution of the data for a specific hypothesis, where the hypothesis cannot itself be treated as a random variable (see Neyman, 1937, 1957b). ...
Article
Full-text available
In this article I investigate the extent to which perspectival realism (PR) agrees with frequentist statistical methodology and philosophy, with an emphasis on J. Neyman’s frequentist statistical methods and philosophy. PR is clarified in the context of frequentist statistics. Based on the example of the stopping rule problem, PR is shown to be able to naturally be associated with frequentist statistics in general. I show that there are explicit and implicit aspects of Neyman’s methods and philosophy that are incompatible and both partially agree and disagree with PR. Additionally, I provide clarifications and interpretations to make Neyman’s methods and philosophy more coherent with the realist aspect of PR. Furthermore, I deliver an argument that, based on Neyman’s methods and philosophy, one is dealing with genuine and non-trivial perspectives. I argue that, despite Neyman being a normative anti-pluralist, there are some elements of perspectival pluralism present in his methods and philosophy. In conclusion, firstly, due to their ambivalence, Neyman’s conceptions align more closely with PR than with alternative, less moderate stances. Secondly, from the perspective of the statistical approach analysed, PR should be treated as a descriptive rather than a normative position, and as case (or aspect)-dependent, rather than a universal, absolute, or binding stance.
... These models all reduce to the expected utility model of Von Neumann and Morgenstern [65] when ambiguity has resolved in the classical Anscombe and Aumann [1] setup. A related strand of literature in financial mathematics is that of convex measures of risk introduced by Föllmer and Schied [26], Frittelli and Rosazza Gianin [32], and Heath and Ku [43], generalizing Artzner et al. [3]; see also the early Wald [66], Huber [44], Deprez and Gerber [19], Ben-Tal and Teboulle [6,7], and the more recent Carr, Geman and Madan [12], Ruszczyński and Shapiro [58] and Ben-Tal and Teboulle [8]. Föllmer and Schied [29,30] and Laeven and Stadje [47,48] provide precise connections between the two strands of the literature. ...
Preprint
We consider the problem of optimal risk sharing in a pool of cooperative agents. We analyze the asymptotic behavior of the certainty equivalents and risk premia associated with the Pareto optimal risk sharing contract as the pool expands. We first study this problem under expected utility preferences with an objectively or subjectively given probabilistic model. Next, we develop a robust approach by explicitly taking uncertainty about the probabilistic model (ambiguity) into account. The resulting robust certainty equivalents and risk premia compound risk and ambiguity aversion. We provide explicit results on their limits and rates of convergence, induced by Pareto optimal risk sharing in expanding pools.
... An alternative formulation exists [29], based on the OB and OC abundances, which aims to restore their balance disrupted by MM. A productive and broadly accepted paradigm in the area of multi-objective optimization is Wald's "minimax" optimality [52,53,54]. The scalarization procedure was designed to solve multi-objective decision-making problems where decisions are made on the basis of the worst possible choice. ...
Preprint
We developed simulation methodology to assess eventual therapeutic efficiency of exogenous multiparametric changes in a four-component cellular system described by a system of ordinary differential equations. The method is numerically implemented to simulate the temporal behavior of a cellular system of multiple myeloma cells. The problem is conceived as an inverse optimization task where the alternative temporal changes of selected parameters of the ordinary differential equations represent candidate solutions and the objective function quantifies the goals of the therapy. The system under study consists of two main cellular components, tumor cells and their cellular environment. The subset of model parameters closely related to the environment is substituted by exogenous time dependencies - therapeutic pulses combining continuous functions and discrete parameters subordinated thereafter to the optimization. Synergistic interaction of temporal parametric changes has been observed and quantified, whereby two or more dynamic parameters show effects that are absent if either parameter is stimulated alone. We expect that the theoretical insight into unstable tumor growth provided by the sensitivity and optimization studies could, eventually, help in designing combination therapies.
... See, for example, Imbens and Manski (2004), Chernozhukov, Hong and Tamer (2007) and Beresteanu and Molinari (2008). Another approach, with a firmer decision-theoretic foundation, would be to address the questionnaire design problem from the perspective of Wald (1950). ...
Preprint
This paper studies questionnaire design as a formal decision problem, focusing on one element of the design process: skip sequencing. We propose that a survey planner use an explicit loss function to quantify the trade-off between cost and informativeness of the survey and aim to make a design choice that minimizes loss. We pose a choice between three options: ask all respondents about an item of interest, use skip sequencing, thereby asking the item only of respondents who give a certain answer to an opening question, or do not ask the item at all. The first option is most informative but also most costly. The use of skip sequencing reduces respondent burden and the cost of interviewing, but may spread data quality problems across survey items, thereby reducing informativeness. The last option has no cost but is completely uninformative about the item of interest. We show how the planner may choose among these three options in the presence of two inferential problems, item nonresponse and response error.
... Our goal was to use concrete examples to provide more insight about Fisher information, something that may benefit psychologists who propose, develop, and compare mathematical models for psychological processes. Other uses of Fisher information are in the detection of model misspecification (Golden, 1995; Golden, 2000; Waldorp, Huizenga and Grasman, 2005; Waldorp, 2009; Waldorp, Christoffels and van de Ven, 2011; White, 1982), in the reconciliation of frequentist and Bayesian estimation methods through the Bernstein-von Mises theorem (Bickel and Kleijn, 2012; Rivoirard and Rousseau, 2012; van der Vaart, 1998; Yang and Le Cam, 2000), in statistical decision theory (e.g., Berger, 1985; Hájek, 1972; Korostelev and Korosteleva, 2011; Ray and Schmidt-Hieber, 2016; Wald, 1949), in the specification of objective priors for more complex models (e.g., Ghosal, Ghosh and Ramamoorthi, 1997; Grazian and Robert, 2015; Kleijn and Zhao, 2017), and in computational statistics and generalized MCMC sampling in particular (e.g., Banterle et al., 2015; Girolami and Calderhead, 2011; Grazian and Liseo, 2014). ...
Preprint
Full-text available
In many statistical applications that concern mathematical psychologists, the concept of Fisher information plays an important role. In this tutorial we clarify the concept of Fisher information as it manifests itself across three different statistical paradigms. First, in the frequentist paradigm, Fisher information is used to construct hypothesis tests and confidence intervals using maximum likelihood estimators; second, in the Bayesian paradigm, Fisher information is used to define a default prior; lastly, in the minimum description length paradigm, Fisher information is used to measure model complexity.
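As background on the quantity the tutorial is organized around (standard definitions, not text from the preprint): for a model f(x; θ) satisfying the usual regularity conditions,

\[
I(\theta) = \mathbb{E}_{\theta}\!\left[ \left( \frac{\partial}{\partial \theta} \log f(X; \theta) \right)^{2} \right]
          = -\,\mathbb{E}_{\theta}\!\left[ \frac{\partial^{2}}{\partial \theta^{2}} \log f(X; \theta) \right],
\]

and the Cramér-Rao bound states that any unbiased estimator based on an i.i.d. sample of size n has variance at least 1/(n I(θ)).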
... This chronological arrangement is fortuitous insofar as it introduces the simpler testing approach by Fisher first, then moves on to the more complex one by Neyman and Pearson, before tackling the incongruent hybrid approach represented by NHST (Gigerenzer, 2004; Hubbard, 2004). Other theories, such as Bayesian hypothesis testing (Lindley, 1965) and Wald's (1950) decision theory, are not the object of this tutorial. ...
Preprint
Despite frequent calls for the overhaul of null hypothesis significance testing (NHST), this controversial procedure remains ubiquitous in behavioral, social and biomedical teaching and research. Little change seems possible once the procedure becomes well ingrained in the minds and current practice of researchers; thus, the optimal opportunity for such change is at the time the procedure is taught, be this at undergraduate or at postgraduate levels. This paper presents a tutorial for the teaching of data testing procedures, often referred to as hypothesis testing theories. The first procedure introduced is the approach to data testing followed by Fisher (tests of significance); the second is the approach followed by Neyman and Pearson (tests of acceptance); the final procedure is the incongruent combination of the previous two theories into the current approach (NHST). For those researchers sticking with the latter, two compromise solutions on how to improve NHST conclude the tutorial.
... In the discrete-time case, the most general existence result was obtained in [29]; it establishes the existence of an optimal portfolio under the standard no-arbitrage condition NA (which is the same as our NA(P) when P is a singleton) for any concave nondecreasing function U , under the sole assumption that u(x) < ∞. Robust or "maxmin"-criteria as in (1.1) are classical in decision theory; the systematic analysis goes back at least to Wald (see the survey [36]). A solid axiomatic foundation was given in modern economics; a landmark paper in this respect is [18]. ...
Preprint
We give a general formulation of the utility maximization problem under nondominated model uncertainty in discrete time and show that an optimal portfolio exists for any utility function that is bounded from above. In the unbounded case, integrability conditions are needed as nonexistence may arise even if the value function is finite.
... For sufficient and necessary conditions under which such evaluations are time-consistent see for instance [21]. Robust expectations of the form above are also known in robust statistics, see Huber [35] or the earlier Wald [47]. ...
Preprint
We model a nonlinear price curve quoted in a market as the utility indifference curve of a representative liquidity supplier. As the utility function we adopt a g-expectation. In contrast to the standard framework of financial engineering, a trader is no longer a price taker, as any trade has a permanent market impact via its effect on the supplier's inventory. The P&L of a trading strategy is written as a nonlinear stochastic integral. Under this market impact model, we introduce a completeness condition under which any derivative can be perfectly replicated by a dynamic trading strategy. In the special case of a Markovian setting the corresponding pricing and hedging can be done by solving a semi-linear PDE.
... It was the seminal paper by Wald [21] that established the background of modern decision theory (cf. [22, Chapt. 7]). The decision-theory approach to control problems was immediately applied (see the books by Sworder [16], Aoki [1], and Sage and Melsa [14]). ...
Preprint
The main objective of this article is to present Bayesian optimal control over a class of non-autonomous linear stochastic discrete-time systems with disturbances belonging to a family of one-parameter uniform distributions. It is proved that the Bayes control for the Pareto priors is the solution of a linear system of algebraic equations. For the case that this linear system is singular, we apply optimization techniques to obtain the Bayesian optimal control. These results are extended to generalized linear stochastic systems of difference equations and provide the Bayesian optimal control for the case where the coefficients of these types of systems are non-square matrices. The paper extends the results of the authors developed for systems with disturbances belonging to the exponential family.
... As mentioned previously, six different robustness metrics are used, as shown in Table 1. As can be seen, the Best-case (also known as Maximax) metric focuses on the best possible performance across all scenarios, while the Worst-case (also known as Maximin) metric focuses on the worst possible performance across all scenarios (Wald, 1950). The Hurwicz optimism-pessimism rule metric combines the two above-mentioned approaches by calculating the weighted mean of the best and worst possible performance (Hurwicz, 1951). ...
... In this section we discuss a concept of optimality related to multiple decision statistical procedures. According to [25], the quality of a statistical procedure is defined by its risk function. Consider a statistical procedure δ(x). ...
Preprint
Investigation of the market graph attracts growing attention in market network analysis. One of the important problems connected with the market graph is to identify it from observations. The traditional way to identify the market graph is to use a simple procedure based on statistical estimations of Pearson correlations between pairs of stocks. Recently a new class of statistical procedures for market graph identification was introduced and the optimality of these procedures in the Pearson correlation Gaussian network was proved. However, the obtained procedures have a high reliability only for Gaussian multivariate distributions of stock attributes. One way to correct this drawback is to consider different networks generated by different measures of pairwise similarity of stocks. A new and promising model in this context is the sign similarity network. In the present paper the market graph identification problem in the sign similarity network is considered. A new class of statistical procedures for market graph identification is introduced and the optimality of these procedures is proved. Numerical experiments detect an essential difference in the quality of optimal procedures in the sign similarity and Pearson correlation networks. In particular, it is observed that the quality of the optimal identification procedure in the sign similarity network is not sensitive to the assumptions on the distribution of stock attributes.
... A version of the Maximin criterion will be presented. The Maximin criterion has been proposed by A. Wald [31], in a different framework. The criterion to be presented is a variation on this theme. ...
Preprint
Full-text available
A model for decision making that generalizes Expected Utility Maximization is presented. This model, Expected Qualitative Utility Maximization, encompasses the Maximin criterion. It relaxes both the Independence and the Continuity postulates. Its main ingredient is the definition of a qualitative order on nonstandard models of the real numbers and the consideration of nonstandard utilities. Expected Qualitative Utility Maximization is characterized by an original weakening of von Neumann-Morgenstern's postulates. Subjective probabilities may be defined from those weakened postulates, as Anscombe and Aumann did from the original postulates. Subjective probabilities are numbers, not matrices as in the Subjective Expected Lexicographic Utility approach. JEL no.: D81 Keywords: Utility Theory, Non-Standard Utilities, Qualitative Decision Theory
... Since then, many different extensions have been proposed [Wakker, 2010, Quiggin, 2012]. Others propose to relax the probabilistic assumption, for instance by considering a possibilistic setting (e.g., Dubois et al. [2003] discuss Savage-like axioms), by considering sets of probabilities such as in decision under ambiguity [Gajdos et al., 2008], or by simply considering completely missing information, such as Wald's [1992] celebrated maximin criterion. ...
Preprint
Full-text available
Literature involving preferences of artificial agents or human beings often assumes their preferences can be represented using a complete transitive binary relation. Much has been written, however, on different models of preferences. We review some of the reasons that have been put forward to justify more complex modeling, and review some of the techniques that have been proposed to obtain models of such preferences.
... The central goal is to approximate the optimal decision rule, f*, which minimizes the expected loss, E[ℓ(y_{t+h}, f(x_t))]. This approach has its roots in decision theory, see Wald (1949), and is adopted in statistical learning, see Vapnik (1999), and economic forecasting, see Granger and Pesaran (2000). For instance, when employing a quadratic loss function, ℓ(y_{t+h}, f(x_t)) = (y_{t+h} − f(x_t))^2, the optimal decision rule corresponds to the (non-linear) regression f*(x_t) = E[y_{t+h} | x_t] with respect to f(x_t). The data-driven decision rules lead to the bias-variance trade-off in the forecasting performance. ...
... For a survey, see Carroll (2019). In particular, we follow Wald (1950), Savage (1951), Hurwicz and Shapiro (1978), Manski (2012), and Guo and Shmaya (2023a,b) in evaluating policies by their worst-case regret. The focus of this literature has typically been on the design of optimal compensation schemes between a principal and an agent or in a hierarchical organization (Walton and Carroll, 2022). We depart by looking at the regulation problem and show that, as in the incentive provision problem, the robustness approach gives us sensible and economically meaningful predictions when the Bayesian problem remains largely intractable. ...
Preprint
Full-text available
We study the robust regulation of labour contracts in moral hazard problems. A firm offers a contract to incentivise production by an agent protected by limited liability. A regulator chooses the set of permissible contracts to (i) improve efficiency and (ii) protect the worker. The regulator does not know the agent's productive actions or the firm's costs and evaluates regulation by its worst-case regret. The regret-minimising regulation imposes a linear minimum wage, allowing all contracts above this linear threshold. The slope of the minimum contract balances the worker's protection - by ensuring they receive a minimal share of the production - and the necessary flexibility for incentive provision.
... Bhattacharya (2009) adopts this approach to study socially-optimal group formation. Manski and Tetenov (2014) study a version of the Wald (1949) treatment assignment problem in which the mean is replaced by the median for evaluating performance. ...
Preprint
Harsanyi (1955) showed that the only way to aggregate individual preferences into a social preference which satisfies certain desirable properties is "utilitarianism", whereby the social utility function is a weighted average of individual utilities. This representation forms the basis for welfare analysis in most applied work. We argue, however, that welfare analysis based on Harsanyi's version of utilitarianism may overlook important distributional considerations. We therefore introduce a notion of utilitarianism for discrete-choice settings which applies to social choice functions, which describe the actions of society, rather than social welfare functions which describe society's preferences (as in Harsanyi). We characterize a representation of utilitarian social choice, and show that it provides a foundation for a family of distributional welfare measures based on quantiles of the distribution of individual welfare effects, rather than averages.
... The idea of the loss function was first introduced by Laplace and redefined by Abraham Wald in the middle of the 20th century. The cost function, or loss function, is a function in decision theory and mathematical optimization that maps the values or events of one or more variables into a real number representing some "cost" associated with those values or events, and it is used for parameter estimation in statistics (Wald, 1949). Han (2020) discussed the reliability and E-posterior risk of E-Bayesian estimations under various loss functions such as SELF, WSELF, PELF and the K loss function with the binomial distribution. ...
Article
Full-text available
... (1) Wald (1950) proposes the concept of optimal decision rules as a statistical problem involving the choice of an action between two possible alternatives, Y, subject to exogenous variables, X. Then, the inference algorithm will separate the observations into two sets, A and B, as dissimilar as possible in terms of entropy, meaning the rule will maximize Shannon's Information Gain (1948). ...
Preprint
Full-text available
Nowadays, discussing Artificial Intelligence means discussing supervised multilayer neural networks solved with groups of thousands of GPUs with astronomical computational capacities. However, the foundation of AI is the simple rule, y(x) = 0 if x < k else 1, which statistically divides data into two subgroups. In this text, I will talk about rules, Shannon's information gain, the equivalence between rules and neural networks, and how viewing the neural network as a matrix operation unexpectedly made GPUs central to solving AI models.
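A minimal sketch of the split rule the abstract describes (the toy data are invented): the cut point k in y(x) = 0 if x < k else 1 is chosen to maximize Shannon's information gain of the induced two-subgroup split.

```python
# Choose the threshold k that maximizes the information gain of the split x < k.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_threshold(x, y):
    parent = entropy(y)
    best = (None, -1.0)
    for k in np.unique(x)[1:]:                 # candidate cut points
        left, right = y[x < k], y[x >= k]
        child = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        gain = parent - child                  # Shannon information gain of the split
        if gain > best[1]:
            best = (float(k), gain)
    return best

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0,   0,   0,   1,   1,   1])
print(best_threshold(x, y))                    # (4.0, 1.0): a perfect split
```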
... In this dam classification example, the upper limit of the estimated LOL would be selected, placing the dam in the VERY HIGH hazard group under the CDA approach. While Wald's criterion has been criticized as being extremely conservative even in a context of complete ignorance [29], for situations related to loss of life, it might be a good option to adopt. ...
Conference Paper
Full-text available
Dam safety risk analysis in its simplest form is based on estimates of the consequence and probability of occurrence of an undesirable event. However, in many instances these variables cannot be accurately determined due to aleatory and/or epistemic uncertainties leading to assessments that may not provide a true representation of risk. The use of the interval probability method is proposed as one means of dealing with uncertainties. For example, the Reclamation Consequences Estimating Methodology [1] defines the potential loss of life resulting from a given dam breach event as an upper limit (LOLU) and a lower limit (LOLL) which can lead to conflicting conclusions with respect to the risks posed by a dam. The paper proposes a method to deal with such potential conflicts and discusses practical problems associated with uncertainty, providing examples of how to resolve these problems.
... Consequently, relevant facts will be obtained and processed using background knowledge, in order to generate sufficient and necessary information to make the best possible decision. In the rich and long history of decision-making methods, there is one method that is relevant for this work, known as sequential analysis; see (Dodge & Romig, 1929; Wald, 1947, 1950; Thompson, 1933) and (Robbins, 1952). Sequential analysis refers to deciding when is the best time to terminate an experiment and to take actions accordingly; see Chow et al. ...
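The canonical instance of the sequential analysis cited above is Wald's sequential probability ratio test (Wald, 1947). The sketch below is generic, for a Bernoulli parameter with two simple hypotheses; the success probabilities, error rates, and simulated data stream are placeholders, not anything from the article.

```python
# Wald's sequential probability ratio test (SPRT) for H0: p = p0 vs H1: p = p1.
import math
import numpy as np

def sprt(stream, p0=0.3, p1=0.6, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)       # accept H1 when crossed
    lower = math.log(beta / (1 - alpha))       # accept H0 when crossed
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", n

rng = np.random.default_rng(1)
data = (rng.random(200) < 0.6).astype(int)     # simulated Bernoulli(0.6) observations
print(sprt(data))                              # typically accepts H1 after a few dozen draws
```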
Article
Full-text available
A sequential optimization model, known as the multi-armed bandit problem, is concerned with the optimal allocation of resources between competing activities in order to generate the most likely benefits for a given period of time. In this work, following the objective of a multi-armed bandit problem, we consider a mean-field game model to approach a large number of multi-armed bandit problems, and propose some connections between dynamic games and sequential optimization problems.
... Efficiency is a common metric in statistics [Wasserman, 2004, Lehmann and Romano, 2005, Wald, 1945], while regret is more common in computer science and especially in the multi-armed bandit literature [Lai and Robbins, 1985, Berry and Fristedt, 1985, Auer et al., 2002]. Savage [1951] proposed minimax regret (which he called "loss," in distinction to "negative income") as a less pessimistic risk measure than the criterion explicitly considered in Wald's pioneering work on decision theory [Wald, 1950]; Savage [1951, p. 65] points out that Wald's theory includes minimax regret implicitly [Wald, 1950, p. 124]. The distinction between minimizing loss and minimizing regret is equivalent to that between the REGROW and GROW criteria for E-values [Grünwald et al., 2024]. ...
Preprint
Full-text available
We develop conservative tests for the mean of a bounded population using data from a stratified sample. The sample may be drawn sequentially, with or without replacement. The tests are "anytime valid," allowing optional stopping and continuation in each stratum. We call this combination of properties sequential, finite-sample, nonparametric validity. The methods express a hypothesis about the population mean as a union of intersection hypotheses describing within-stratum means. They test each intersection hypothesis using independent test supermartingales (TSMs) combined across strata by multiplication. The P-value of the global null hypothesis is then the maximum P-value of any intersection hypothesis in the union. This approach has three primary moving parts: (i) the rule for deciding which stratum to draw from next to test each intersection null, given the sample so far; (ii) the form of the TSM for each null in each stratum; and (iii) the method of combining evidence across strata. These choices interact. We examine the performance of a variety of rules with differing computational complexity. Approximately optimal methods have a prohibitive computational cost, while naive rules may be inconsistent -- they will never reject for some alternative populations, no matter how large the sample. We present a method that is statistically comparable to optimal methods in examples where optimal methods are computable, but computationally tractable for arbitrarily many strata. In numerical examples its expected sample size is substantially smaller than that of previous methods.
Article
Full-text available
This study addresses the urgent need for accurate health resource forecasting in Ghana's rural healthcare system, where over 60% of districts experience quarterly stockouts of essential medications and the doctor-to-population ratio is as low as 1:11,000. To tackle these challenges, the study applies Bayesian Hierarchical Modeling (BHM), which integrates spatial heterogeneity, temporal variability, and prior data to forecast drug supply, personnel deployment, and equipment distribution. Using secondary data from 2020-2024 across five representative rural districts, the study employed spatial, temporal, and spatio-temporal Bayesian models. Statistical tests revealed significant results: a paired t-test for spatial effects showed t(4) = 6.45, p < 0.01; ANOVA for seasonal personnel deployment yielded F(3,16) = 5.62, p = 0.008; and integrating prior information improved equipment forecast accuracy from 72% to 78% (t(8) = 3.87, p = 0.005). The model achieved 84% forecast accuracy for antimalarials, with RMSE values as low as 3.2. A multiple regression model indicated that equipment efficiency (β = 0.58, p < 0.01), doctor-to-population ratio (β = 0.34, p = 0.04), and seasonal deployment (β = 0.28, p = 0.06) together explained 63% (R² = 0.63) of the variance in resource utilization. A strong correlation (r = 0.93) between equipment efficiency and utilization affirms the model's predictive power. These findings underscore the necessity of dynamic, localized, and evidence-based forecasting systems. The research recommends embedding BHM into national health information platforms to enable real-time, district-level planning. It also advocates for investments in data infrastructure and personnel training to sustain model-driven resource equity across rural Ghana.
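The mechanism that lets a hierarchical Bayesian model borrow strength across districts is partial pooling, which can be illustrated with a simple Normal-Normal shrinkage sketch. This is not the study's spatio-temporal model; the demand figures and variance components below are invented.

```python
import statistics

# Partial pooling: shrink each district's mean demand toward the cross-district mean,
# with weight determined by how much data the district has. All figures are invented.
district_demand = {                  # hypothetical monthly antimalarial demand
    "A": [120, 135, 128, 140],
    "B": [95, 88, 102],
    "C": [160, 150, 170, 155, 165],
}

sigma2 = 100.0                       # assumed within-district sampling variance
tau2 = 400.0                         # assumed between-district variance
mu0 = statistics.mean(x for obs in district_demand.values() for x in obs)

for d, obs in district_demand.items():
    ybar, n = statistics.mean(obs), len(obs)
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)      # weight on the district's own data
    shrunk = w * ybar + (1 - w) * mu0               # posterior (partially pooled) mean
    print(d, round(ybar, 1), "->", round(shrunk, 1))
```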
Preprint
Full-text available
In policy debates concerning the governance and regulation of Artificial Intelligence (AI), both the Precautionary Principle (PP) and the Innovation Principle (IP) are advocated by their respective interest groups. Do these principles offer wholly incompatible and contradictory guidance? Does one necessarily negate the other? I argue here that provided attention is restricted to weak-form PP and IP, the answer to both of these questions is "No." The essence of these weak formulations is the requirement to fully account for type-I error costs arising from erroneously preventing the innovation's diffusion through society (i.e. mistaken regulatory red-lighting) as well as the type-II error costs arising from erroneously allowing the innovation to diffuse through society (i.e. mistaken regulatory green-lighting). Within the Signal Detection Theory (SDT) model developed here, weak-PP red-light (weak-IP green-light) determinations are optimal for sufficiently small (large) ratios of expected type-I to type-II error costs. For intermediate expected cost ratios, an amber-light 'wait-and-monitor' policy is optimal. Regulatory sandbox instruments allow AI testing and experimentation to take place within a structured environment of limited duration and societal scale, whereby the expected cost ratio falls within the 'wait-and-monitor' range. Through sandboxing regulators and innovating firms learn more about the expected cost ratio, and what respective adaptations --- of regulation, of technical solution, of business model, or combination thereof, if any --- are needed to keep the ratio out of the weak-PP red-light zone.
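The red/amber/green logic summarized in this abstract reduces to comparing the expected ratio of type-I (mistaken red-lighting) to type-II (mistaken green-lighting) error costs against two cutoffs. A minimal sketch, with placeholder cutoffs and costs rather than the paper's calibration:

```python
# Sketch of the weak-PP / weak-IP decision logic: the recommendation turns on the
# ratio of expected type-I to type-II error costs. Cutoffs and costs are placeholders.
def policy(expected_type1_cost, expected_type2_cost, low=0.5, high=2.0):
    ratio = expected_type1_cost / expected_type2_cost
    if ratio < low:
        return "red light (weak-PP): block diffusion"
    if ratio > high:
        return "green light (weak-IP): allow diffusion"
    return "amber light: wait and monitor (e.g., in a regulatory sandbox)"

print(policy(expected_type1_cost=1.0, expected_type2_cost=5.0))   # red light
print(policy(expected_type1_cost=3.0, expected_type2_cost=3.0))   # amber light
print(policy(expected_type1_cost=10.0, expected_type2_cost=2.0))  # green light
```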
Article
The problem of analyzing connections between stock returns is considered. Connections are measured both by the traditional Pearson correlation coefficient and by the Kendall rank correlation coefficient. Different measures of the uncertainty of conclusions about connections in stock markets, based on separating conclusions into significant and admissible, are proposed. The proposed measures include the ratio of the number of admissible conclusions to the total number of conclusions and the ratio of the number of admissible conclusions to the number of significant conclusions. These measures are divided into two types. Measures of the first type are defined as functions of the strength of the connection and provide detailed information on how the uncertainty of conclusions about connections of a given strength changes. Measures of the second type, or aggregate indicators of the uncertainty of conclusions about connections, do not depend on the strength of the connection and characterize the uncertainty of the market as a whole. A comparison of the uncertainty of conclusions about connections in the stock markets of Russia, the USA, and France is provided. It is shown that these markets differ only slightly in the share of admissible conclusions, regardless of the correlation coefficient used and the type of uncertainty measure. At the same time, in terms of the ratio of admissible to significant conclusions about connections, the Russian stock market is considerably more uncertain. The proposed methodology can be used for a more detailed comparative analysis of the uncertainty of conclusions about connections in the stock markets of different countries.
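The abstract does not spell out exactly how conclusions are split into significant and admissible, so the sketch below assumes a two-threshold rule on p-values (a strict level for "significant", a looser one for "admissible") purely for illustration, using SciPy's Pearson and Kendall correlation tests on simulated returns.

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

# Classify pairwise return correlations as "significant", "admissible", or neither.
# The two-threshold split is an assumption for illustration, not the paper's definition.
rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 4))            # hypothetical daily returns for 4 stocks
alpha_strict, alpha_loose = 0.01, 0.10

def classify(x, y, method=pearsonr):
    _, p = method(x, y)
    if p < alpha_strict:
        return "significant"
    return "admissible" if p < alpha_loose else "neither"

n = returns.shape[1]
for i in range(n):
    for j in range(i + 1, n):
        print(i, j,
              classify(returns[:, i], returns[:, j], pearsonr),
              classify(returns[:, i], returns[:, j], kendalltau))
```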
Article
We study minimax regret treatment rules under matched treatment assignment in a setup where a policymaker, informed by a sample of size N, needs to decide between T different treatments for T ≥ 2. Randomized rules are allowed for. We show that the generalization of the minimax regret rule derived in Schlag (2006, ELEVEN—Tests needed for a recommendation, EUI working paper) and Stoye (2009, Journal of Econometrics 151, 70–81) for the case T = 2 is minimax regret for general finite T > 2, and also that the proof structure via the Nash equilibrium and the “coarsening” approaches generalizes as well. We also show by example that, in the case of random assignment, the generalization of the minimax rule in Stoye (2009, Journal of Econometrics 151, 70–81) to the case T > 2 is not necessarily minimax regret, and we derive minimax regret rules for a few small sample cases, e.g., for N = 2 when T = 3. In the case where a covariate x is included, it is shown that a minimax regret rule is obtained by using minimax regret rules in the “conditional-on-x” problem if the latter are obtained as Nash equilibria.
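For reference, the regret criterion underlying these results can be written in generic notation (not the paper's): for a treatment rule δ mapping the sample S of size N to one of the T treatments, with payoff u(t, θ) from treatment t in state θ,

```latex
R(\delta,\theta) \;=\; \max_{t \in \{1,\dots,T\}} u(t,\theta)
                 \;-\; \mathbb{E}_{\theta}\!\left[u\bigl(\delta(S),\theta\bigr)\right],
\qquad
\delta^{*} \in \arg\min_{\delta}\,\max_{\theta \in \Theta} R(\delta,\theta),
```

so a minimax regret rule minimizes the worst-case shortfall relative to an oracle that knows θ.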
Article
This paper addresses the challenge of uncertainty in monetary policy by incorporating local model uncertainty arising from heterogeneous expectations into a behavioral New Keynesian DSGE framework. Non-Bayesian control techniques are adopted to minimize a welfare loss derived from the second-order approximation of agents’ utilities to derive robust optimal policies. In the context of uncertainty regarding the formation of behavioral expectations, the importance of gradualism in monetary adjustments is emphasized. Policymakers should consider the ratio of rational to boundedly rational agents and anticipate that a significant presence of boundedly rational agents, especially those with long-horizon expectations, may require more dynamic adjustments to interest rates. Accounting for multidimensional uncertainty exponentially increases the complexity of the analysis and model indeterminacy.
Article
We study how to regulate a monopolistic firm using a robust-design, non-Bayesian approach. We derive a policy that minimizes the regulator’s worst-case regret, where regret is the difference between the regulator’s complete-information payoff and his realized payoff. When the regulator’s payoff is consumers’ surplus, he caps the firm’s average revenue. When his payoff is the total surplus of both consumers and the firm, he offers a piece rate subsidy to the firm while capping the total subsidy. For intermediate cases, the regulator combines these three policy instruments to balance three goals: protecting consumers’ surplus, mitigating underproduction, and limiting potential overproduction. (JEL D21, D42, D83, H25, L51)
Article
We study information aggregation with a decision-maker aggregating binary recommendations from symmetric agents. Each agent’s recommendation depends on her private information about a hidden state. While the decision-maker knows the prior distribution over states and the marginal distribution of each agent’s recommendation, the recommendations are adversarially correlated. The decision-maker’s goal is choosing a robustly optimal aggregation rule. We prove that for a large number of agents for the three standard robustness paradigms (maximin, regret, and approximation ratio), the unique optimal aggregation rule is “random dictator.” We further characterize the minimal regret for any number of agents through concavification. (JEL D81, D82, D83)
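The "random dictator" rule that this abstract identifies as robustly optimal is easy to state in code: follow the recommendation of a single agent drawn uniformly at random. A minimal, purely illustrative sketch:

```python
import random

def random_dictator(recommendations, rng=random):
    """Aggregate binary recommendations by following one agent chosen uniformly at random."""
    return rng.choice(recommendations)

recs = [1, 0, 1, 1, 0]          # hypothetical binary recommendations from 5 agents
random.seed(42)
print(random_dictator(recs))
```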
Article
Full-text available
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and to innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institution. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers could utilize our approach in order to arrive at the appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model stands out within the related literature as one of the few studies to employ the N-DM in the context of academic staff selection. As a case study, we apply our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Article
Full-text available
Scenarios have emerged as valuable tools in managing complex human‐natural systems, but the traditional approach of limiting focus to a small number of predetermined scenarios can inadvertently miss consequential dynamics, extremes, and diverse stakeholder impacts. Exploratory modeling approaches have been developed to address these issues by exploring a wide range of possible futures and identifying those that yield consequential vulnerabilities. However, vulnerabilities are typically identified based on aggregate robustness measures that do not take full advantage of the richness of the underlying dynamics in the large ensembles of model simulations and can make it hard to identify key dynamics and/or storylines that can guide planning or further analyses. This study introduces the FRamework for Narrative Storylines and Impact Classification (FRNSIC; pronounced “forensic”): a scenario discovery framework that addresses these challenges by organizing and investigating consequential scenarios using hierarchical classification of diverse outcomes across actors, sectors, and scales, while also aiding in the selection of scenario storylines, based on system dynamics that drive consequential outcomes. We present an application of this framework to the Upper Colorado River Basin, focusing on decadal droughts and their water scarcity implications for the basin's diverse users and its obligations to downstream states through Lake Powell. We show how FRNSIC can explore alternative sets of impact metrics and drought dynamics and use them to identify drought scenario storylines that can be used to inform future adaptation planning.