Article

Markov Decision Processes with Average-Value-at-Risk criteria

Mathematical Methods of Operations Research (Impact Factor: 0.63). 12/2011; 74(3):361-379. DOI: 10.1007/s00186-011-0367-0

ABSTRACT

We investigate the problem of minimizing the Average-Value-at-Risk (AVaR_τ) of the discounted cost generated by a Markov Decision Process (MDP) over a finite and an infinite horizon. We show that this problem can be reduced to an ordinary MDP with extended state space and give conditions under which an optimal policy exists. We also give a time-consistent interpretation of the AVaR_τ. Finally, we consider a numerical example, a simple repeated casino game, which is used to discuss the influence of the risk-aversion parameter τ of the AVaR_τ criterion.
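
For orientation, the AVaR_τ of a cost X at level τ ∈ (0, 1) admits the standard Rockafellar-Uryasev representation shown below; the notation is a generic sketch and may differ from the paper's own conventions.

    \[
      \mathrm{AVaR}_{\tau}(X)
        \;=\; \min_{s \in \mathbb{R}}
        \Bigl\{\, s + \tfrac{1}{1-\tau}\, \mathbb{E}\bigl[(X - s)^{+}\bigr] \Bigr\},
      \qquad (x)^{+} := \max(x, 0).
    \]

Roughly speaking, once the auxiliary variable s is fixed, what remains is an expected-cost criterion applied to the total discounted cost; carrying the cost accumulated so far as an additional state component is what yields the "ordinary MDP with extended state space" mentioned in the abstract.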

Full-text preview available from: math.kit.edu
    • "The second method is to re-write the original problem in a form where we can apply dynamic programming in an indirect way. This approach has been used to reduce to dynamic programming in a higher-dimensional state space in [5] [32] [28] [23]. The rationale in this paper is similar in spirit to this so-called indirect dynamic programming method. "
    ABSTRACT: We consider continuous-time stochastic optimal control problems featuring Conditional Value-at-Risk (CVaR) in the objective. The major difficulty in these problems arises from time-inconsistency, which prevents us from directly using dynamic programming. To resolve this challenge, we convert the problem to an equivalent bilevel optimization problem in which the inner optimization problem is standard stochastic control. Furthermore, we provide conditions under which the outer objective function is convex and differentiable. We compute the outer objective's value via a Hamilton-Jacobi-Bellman equation and its gradient via the viscosity solution of a linear parabolic equation, which allows us to perform gradient descent. The significance of this result is that we provide an efficient dynamic-programming-based algorithm for optimal control of CVaR without lifting the state space. To broaden the applicability of the proposed algorithm, we provide convergent approximation schemes in cases where our key assumptions do not hold and characterize relevant suboptimality bounds. In addition, we extend our method to a more general class of risk metrics, which includes mean-variance and median-deviation. We also demonstrate a concrete application to portfolio optimization under CVaR constraints. Our results contribute an efficient framework for solving time-inconsistent CVaR-based dynamic optimization.
    Full-text · Article · Dec 2015
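    A minimal Python sketch of the outer half of this bilevel structure (an illustration under simplifying assumptions, not the paper's PDE-based algorithm): the inner stochastic control problem is replaced by a fixed sample of terminal costs, so only the one-dimensional outer minimization over the auxiliary scalar s is shown.

      # Toy sketch: CVaR_alpha(Z) = min over s of { s + E[(Z - s)^+] / (1 - alpha) }.
      # 'costs' stands in for terminal costs that an already-solved inner control
      # problem would produce; only the outer structure is illustrated.
      import numpy as np

      def outer_objective(s, costs, alpha):
          # s + E[(Z - s)^+] / (1 - alpha), estimated from a sample of costs
          return s + np.mean(np.maximum(costs - s, 0.0)) / (1.0 - alpha)

      rng = np.random.default_rng(0)
      costs = rng.normal(loc=1.0, scale=0.5, size=10_000)   # illustrative cost sample
      alpha = 0.95

      # Crude grid search over s; the paper instead performs gradient descent, with
      # the gradient obtained from the viscosity solution of a linear parabolic equation.
      grid = np.linspace(costs.min(), costs.max(), 2001)
      values = [outer_objective(s, costs, alpha) for s in grid]
      print("approximate CVaR:", min(values), "near s =", grid[int(np.argmin(values))])

    The abstract's convexity and differentiability conditions are what make this outer problem amenable to a simple one-dimensional descent.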
    • "We assume that there exists a policy µ(·|·; θ) such that CVaR α D θ (x 0 ) ≤ β (feasibility assumption). As discussed in Section 1, Bäuerle and Ott [19] [3] showed that there exists a deterministic history-dependent optimal policy for CVaR optimization. The important point is that this policy does not depend on the complete history, but only on the current time step t, current state of the system x t , and accumulated discounted cost t i=0 γ i c(x i , a i ). "
    ABSTRACT: In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in costs in addition to minimizing a standard criterion. Conditional value-at-risk (CVaR) is a relatively new risk measure that addresses some of the shortcomings of the well-known variance-related risk measures and, because of its computational efficiency, has gained popularity in finance and operations research. In this paper, we consider the mean-CVaR optimization problem in MDPs. We first derive a formula for computing the gradient of this risk-sensitive objective function. We then devise policy gradient and actor-critic algorithms, each of which uses a specific method to estimate this gradient and updates the policy parameters in the descent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in an optimal stopping problem.
    Full-text · Article · Jun 2014 · Advances in Neural Information Processing Systems
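    As a rough illustration of how such a gradient can be estimated from simulated trajectories, the snippet below sketches a likelihood-ratio-style estimator of the kind used in this literature; the function name, array shapes, and synthetic data are assumptions for illustration, not the paper's implementation.

      # Sample-based estimate of grad_theta CVaR_alpha of the trajectory cost.
      # Assumes N trajectories were simulated under the current policy, with
      #   costs[j]  = total (discounted) cost of trajectory j, and
      #   scores[j] = grad_theta log-probability of trajectory j under the policy.
      import numpy as np

      def cvar_gradient_estimate(costs, scores, alpha):
          var_hat = np.quantile(costs, alpha)                # empirical VaR_alpha of the costs
          excess = np.maximum(costs - var_hat, 0.0)          # only tail trajectories contribute
          weights = excess / (len(costs) * (1.0 - alpha))    # likelihood-ratio weights
          return weights @ scores                            # estimated gradient, shape (d,)

      rng = np.random.default_rng(1)
      costs = rng.gamma(shape=2.0, scale=1.0, size=5_000)    # synthetic trajectory costs
      scores = rng.normal(size=(5_000, 3))                   # synthetic score vectors (d = 3)
      print(cvar_gradient_estimate(costs, scores, alpha=0.95))

    A policy-gradient method would then move the policy parameters a small step against this estimated direction.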
  • ABSTRACT: We investigate the problem of minimizing a certainty equivalent of the total or discounted cost generated by a Markov decision process (MDP) over a finite and an infinite horizon. In contrast to the risk-neutral case, this optimization criterion takes the variability of the cost into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We show that this optimization problem can be solved by an ordinary MDP with extended state space and give conditions under which an optimal policy exists. In the case of an infinite time horizon we show that the minimal discounted cost can be obtained by value iteration and can be characterized as the unique solution of a fixed-point equation using a "sandwich" argument. Interestingly, it turns out that in the case of a power utility the problem simplifies and is of complexity similar to the exponential utility case; however, it has not been treated in the literature so far. We also establish the validity (and convergence) of the policy improvement method. A simple numerical example, namely the classical repeated casino game, is considered to illustrate the influence of the certainty equivalent and its parameters. Finally, the average cost problem is also investigated. Surprisingly, it turns out that under suitable recurrence conditions on the MDP, for a convex power utility the minimal average cost does not depend on the parameter of the utility function and is equal to the risk-neutral average cost. This is in contrast to the classical risk-sensitive criterion with exponential utility.
    No preview · Article · Feb 2014 · Mathematics of Operations Research
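    For reference, the certainty equivalent of a cost X under a strictly increasing utility U is commonly defined as below; the notation is a generic sketch rather than the paper's own, with the exponential and power utilities giving the two special cases discussed in the abstract.

      \[
        \mathrm{CE}_{U}(X) \;=\; U^{-1}\bigl(\mathbb{E}\bigl[U(X)\bigr]\bigr),
        \qquad
        U(x) = e^{\gamma x} \;\Rightarrow\; \mathrm{CE}_{U}(X) = \tfrac{1}{\gamma}\,\log \mathbb{E}\bigl[e^{\gamma X}\bigr],
        \qquad
        U(x) = x^{q} \;\Rightarrow\; \mathrm{CE}_{U}(X) = \bigl(\mathbb{E}\bigl[X^{q}\bigr]\bigr)^{1/q}.
      \]

    The exponential case recovers the classical risk-sensitive (entropic) criterion mentioned in the abstract; the convex power case is the one for which the abstract reports that the minimal average cost coincides with the risk-neutral one.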