Bounding Procedures for Stochastic Dynamic Programs with Application to the Perimeter Patrol Problem

Conference paper, Proceedings of the American Control Conference, June 2012
DOI: 10.1109/ACC.2012.6314780 · Source: arXiv (abs/1108.3299)
Conference: American Control Conference, Montreal, QC, Canada
Abstract
One often encounters the curse of dimensionality when applying dynamic programming to determine optimal policies for controlled Markov chains. In this paper, we provide a method to construct sub-optimal policies, along with a bound on the deviation of such a policy from the optimum, via a linear programming approach. The state space is partitioned, and the optimal cost-to-go or value function is approximated by a constant over each partition. By minimizing a positive cost function defined on the partitions, one can construct an approximate value function that is also an upper bound for the optimal value function of the original Markov Decision Process (MDP). As a key result, we show that this approximate value function is independent of the positive cost function (or state-dependent weights, as they are referred to in the literature) and, moreover, is the least upper bound one can obtain once the partitions are specified. We apply the linear programming approach to a perimeter surveillance stochastic optimal control problem, whose structure enables efficient computation of the upper bound.
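The partition-based LP sketched in the abstract can be illustrated on a toy problem. The sketch below assumes a made-up 4-state, 2-action cost-minimization MDP (the transition matrix `P`, stage costs `R`, and partition map `phi` are invented for illustration, not taken from the paper): the approximate value function is constant on each partition block, and one LP variable per block is constrained to dominate every one-step Bellman backup, which guarantees an upper bound on the optimal cost-to-go.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 4-state, 2-action cost-minimization MDP (illustrative data only).
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # stage cost r(s, a)

# Hard partition of the state space: states {0, 1} form block 0, states {2, 3}
# form block 1; the approximate value function equals w[b] on each block b.
phi = np.array([0, 0, 1, 1])
n_blocks = 2

# LP constraints: for every state-action pair (s, a),
#   w[phi(s)] >= r(s, a) + gamma * sum_{s'} P(s' | s, a) * w[phi(s')],
# encoded as A_ub @ w <= b_ub.  Any feasible w dominates one Bellman backup
# of itself, so by monotonicity it dominates the optimal cost-to-go V*.
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        row = np.zeros(n_blocks)
        row[phi[s]] -= 1.0                       # -w[phi(s)]
        for sp in range(n_states):
            row[phi[sp]] += gamma * P[s, a, sp]  # +gamma * P(s'|s,a) * w[phi(s')]
        A_ub.append(row)
        b_ub.append(-R[s, a])

# Solve with two different positive weight vectors c.  The key result quoted
# above says the minimizer does not depend on c: it is the least feasible
# point, i.e. the tightest piecewise-constant upper bound for this partition.
sols = []
for c in ([1.0, 1.0], [0.3, 5.0]):
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * n_blocks)
    sols.append(res.x)
print(sols[0], sols[1])
```

The weight-independence is plausible here because the feasible set is closed under the componentwise minimum (the backup operator on the right-hand side is monotone), so a unique least feasible point exists and any strictly positive objective recovers it.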