Markov Decision Processes with Average-Value-at-Risk criteria

Mathematical Methods of Operations Research (Impact Factor: 0.63). 12/2011; 74(3):361-379. DOI: 10.1007/s00186-011-0367-0


We investigate the problem of minimizing the Average-Value-at-Risk (AVaR_τ) of the discounted cost over a finite and an infinite horizon which is generated by a Markov Decision Process (MDP). We show that this problem can be reduced to an ordinary MDP with extended state space and give conditions under which an optimal policy exists. We also give a time-consistent interpretation of the AVaR_τ. At the end we consider a numerical example which is a simple repeated casino game. It is used to discuss the influence of the risk aversion parameter τ of the AVaR_τ.

Keywords: Markov Decision Problem – Average-Value-at-Risk – Time-consistency – Risk aversion
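
For readers who want to compute the criterion directly, AVaR_τ of a cost X admits the standard Rockafellar-Uryasev representation AVaR_τ(X) = min_s { s + E[(X - s)^+] / (1 - τ) }, with the minimum attained at the τ-quantile (Value-at-Risk) of X. The following minimal Python sketch estimates AVaR from Monte Carlo samples of a discounted cost; the function name and sample distribution are illustrative only and are not code from the paper.

    import numpy as np

    def avar(costs, tau):
        # Rockafellar-Uryasev representation:
        #   AVaR_tau(X) = min_s { s + E[(X - s)^+] / (1 - tau) }
        # For an empirical sample the minimizer is the tau-quantile of the costs.
        costs = np.asarray(costs, dtype=float)
        s = np.quantile(costs, tau)                      # empirical VaR_tau
        return s + np.mean(np.maximum(costs - s, 0.0)) / (1.0 - tau)

    # Illustrative use with simulated discounted-cost samples (not from the paper).
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=10.0, scale=3.0, size=100_000)
    print(avar(sample, tau=0.95))   # roughly the mean cost over the worst 5% of outcomes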

    • "We assume that there exists a policy µ(·|·; θ) such that CVaR α D θ (x 0 ) ≤ β (feasibility assumption). As discussed in Section 1, Bäuerle and Ott [19] [3] showed that there exists a deterministic history-dependent optimal policy for CVaR optimization. The important point is that this policy does not depend on the complete history, but only on the current time step t, current state of the system x t , and accumulated discounted cost t i=0 γ i c(x i , a i ). "
    ABSTRACT: In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in costs in addition to minimizing a standard criterion. Conditional value-at-risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of the well-known variance-related risk measures, and because of its computational efficiency has gained popularity in finance and operations research. In this paper, we consider the mean-CVaR optimization problem in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then devise policy gradient and actor-critic algorithms that each use a specific method to estimate this gradient and update the policy parameters in the descent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem.
  • ABSTRACT: We investigate the problem of minimizing a certainty equivalent of the total or discounted cost over a finite and an infinite horizon that is generated by a Markov decision process (MDP). In contrast to a risk-neutral decision maker, this optimization criterion takes the variability of the cost into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We show that this optimization problem can be solved by an ordinary MDP with extended state space and give conditions under which an optimal policy exists. In the case of an infinite time horizon we show that the minimal discounted cost can be obtained by value iteration and can be characterized as the unique solution of a fixed-point equation using a “sandwich” argument. Interestingly, it turns out that in the case of a power utility, the problem simplifies and is of similar complexity to the exponential utility case, yet it has not been treated in the literature so far. We also establish the validity (and convergence) of the policy improvement method. A simple numerical example, namely, the classical repeated casino game, is considered to illustrate the influence of the certainty equivalent and its parameters. Finally, the average cost problem is also investigated. Surprisingly, it turns out that under suitable recurrence conditions on the MDP, for convex power utility the minimal average cost does not depend on the parameter of the utility function and is equal to the risk-neutral average cost. This is in contrast to the classical risk-sensitive criterion with exponential utility. (A short numerical sketch of the exponential-utility certainty equivalent appears after this list.)
    Mathematics of Operations Research 02/2014; 39(1). DOI: 10.1287/moor.2013.0601 · 1.31 Impact Factor
  • ABSTRACT: Conditional Value at Risk (CVaR) is a prominent risk measure that is used extensively in various domains such as finance. In this work we present a new formula for the gradient of the CVaR in the form of a conditional expectation. Our result is similar to policy gradients in the reinforcement learning literature. Based on this formula, we propose novel sampling-based estimators for the CVaR gradient, and a corresponding gradient descent procedure for CVaR optimization. We evaluate our approach in learning a risk-sensitive controller for the game of Tetris, and propose an importance sampling procedure that is suitable for such domains.
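
To make the sampling idea in the two CVaR-gradient abstracts above concrete, here is a minimal Python sketch of a likelihood-ratio style estimator of the CVaR gradient of a trajectory cost. It assumes score functions (gradients of the log trajectory probability) are available, uses the empirical VaR as the tail threshold, and glosses over technical conditions; the function name, inputs, and normalization are illustrative rather than taken from either paper.

    import numpy as np

    def cvar_gradient_estimate(costs, score_functions, alpha):
        # Likelihood-ratio style estimate of grad_theta CVaR_alpha of a
        # trajectory cost D under a policy pi_theta, based on the form
        #   grad CVaR_alpha(D) = E[ grad_theta log P_theta(traj) * (D - VaR_alpha) | D >= VaR_alpha ],
        # estimated with the empirical VaR and the tail trajectories only.
        costs = np.asarray(costs, dtype=float)               # shape (N,)
        scores = np.asarray(score_functions, dtype=float)    # shape (N, dim_theta)
        var = np.quantile(costs, alpha)                      # empirical VaR_alpha
        tail = costs >= var
        excess = costs[tail] - var
        return np.mean(excess[:, None] * scores[tail], axis=0)

    # Illustrative call with fake rollout data (1000 trajectories, 3 policy parameters).
    rng = np.random.default_rng(1)
    fake_costs = rng.exponential(scale=5.0, size=1000)
    fake_scores = rng.normal(size=(1000, 3))
    print(cvar_gradient_estimate(fake_costs, fake_scores, alpha=0.95))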
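
This is the sketch referenced in the certainty-equivalent abstract above: in the exponential-utility case the certainty equivalent of a cost X reduces to the entropic risk measure CE_γ(X) = (1/γ) log E[exp(γ X)]. A minimal Python sketch of this computation from cost samples follows; the function name and data are illustrative and are not taken from the paper.

    import numpy as np

    def entropic_certainty_equivalent(costs, gamma):
        # Certainty equivalent of a random cost X under exponential utility
        # U(x) = exp(gamma * x), gamma > 0 (risk-averse in costs):
        #   CE_gamma(X) = (1/gamma) * log E[exp(gamma * X)].
        # A log-sum-exp shift keeps the computation numerically stable.
        z = gamma * np.asarray(costs, dtype=float)
        m = z.max()
        return (m + np.log(np.mean(np.exp(z - m)))) / gamma

    # Illustrative use: the certainty equivalent exceeds the plain expectation
    # for gamma > 0 and approaches it as gamma -> 0.
    rng = np.random.default_rng(2)
    sample = rng.normal(loc=10.0, scale=3.0, size=100_000)
    print(sample.mean(), entropic_certainty_equivalent(sample, gamma=0.5))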

