
Bounding Procedures for Stochastic Dynamic Programs with Application to the Perimeter Patrol Problem

Proceedings of the American Control Conference, 08/2011; arXiv:1108.3299.
Source: DBLP

ABSTRACT: One often encounters the curse of dimensionality in the application of
dynamic programming to determine optimal policies for controlled Markov chains.
In this paper, we provide a method to construct sub-optimal policies along with
a bound for the deviation of such a policy from the optimum via a linear
programming approach. The state-space is partitioned and the optimal cost-to-go
or value function is approximated by a constant over each partition. By
minimizing a non-negative cost function defined on the partitions, one can
construct an approximate value function which also happens to be an upper bound
for the optimal value function of the original Markov Decision Process (MDP).
As a key result, we show that this approximate value function is independent
of the non-negative cost function (or state-dependent weights, as it is called
in the literature); moreover, it is the least upper
bound that one can obtain once the partitions are specified. Furthermore, we
show that the restricted system of linear inequalities also embeds a family of
MDPs of lower dimension, one of which can be used to construct a lower bound on
the optimal value function. The construction of the lower bound requires the
solution to a combinatorial problem. We apply the linear programming approach
to a perimeter surveillance stochastic optimal control problem and obtain
numerical results that corroborate the efficacy of the proposed methodology.
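The LP construction described above can be made concrete on a toy problem. The sketch below (Python with scipy) solves a restricted linear program for a small randomly generated MDP under a reward-maximization formulation with discount factor alpha: the value function is forced to be constant over each partition via an indicator (aggregation) matrix, and the LP optimum is then an upper bound on the optimal value function, which the script checks against exact value iteration. This is a minimal sketch under stated assumptions; the MDP instance, the partition, and all variable names are illustrative, not the paper's perimeter patrol model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nS, nA, alpha = 6, 2, 0.9

# Toy data: row-stochastic transition kernels P[a] and rewards r[s, a].
P = np.array([rng.dirichlet(np.ones(nS), size=nS) for _ in range(nA)])
r = rng.uniform(0.0, 1.0, size=(nS, nA))

# Partition states into meta-states; V is restricted to be constant on each
# partition: V = Phi @ w, with Phi the (nS x nK) indicator matrix.
partition = np.array([0, 0, 1, 1, 2, 2])   # state -> meta-state (assumed)
nK = partition.max() + 1
Phi = np.eye(nK)[partition]

# Restricted LP (reward maximization): minimize c' Phi w subject to
#   (Phi w)(s) >= r(s, a) + alpha * sum_t P(t | s, a) (Phi w)(t)  for all (s, a).
# Any feasible Phi w dominates the Bellman fixed point, so the optimum is an
# upper bound on V*; per the paper's key result, with indicator features the
# optimum is the same for any positive state weights c.
c = Phi.T @ np.ones(nS)                     # arbitrary positive weights
A_ub = np.vstack([-(Phi - alpha * P[a] @ Phi) for a in range(nA)])
b_ub = np.concatenate([-r[:, a] for a in range(nA)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * nK)
V_upper = Phi @ res.x

# Exact value iteration for comparison.
V = np.zeros(nS)
for _ in range(3000):
    V = np.max(r + alpha * np.einsum('ast,t->sa', P, V), axis=1)
print("aggregated upper bound:", np.round(V_upper, 3))
print("optimal value V*      :", np.round(V, 3))
assert np.all(V_upper >= V - 1e-6)
```

Note that the LP has only nK variables rather than nS, which is the point of the aggregation: the constraint set is still indexed by the original state-action pairs, but the decision space shrinks to one value per partition.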

Related publication:
ABSTRACT: One often encounters the curse of dimensionality in the application of dynamic programming to determine optimal policies for controlled Markov chains. In this paper, we provide a method to construct sub-optimal policies, along with a bound for the deviation of such a policy from the optimum, through the use of restricted linear programming. The novelty of this approach lies in circumventing the need for value iteration or a linear program defined over the entire state-space. Instead, the state-space is partitioned based on the reward structure, and the optimal cost-to-go or value function is approximated by a constant over each partition. We associate a meta-state with each partition, where the transition probabilities between the meta-states can be derived from the original Markov chain specification. The state aggregation approach results in a significant reduction in the computational burden and lends itself to a restricted linear program defined on the aggregated state-space. Finally, the proposed method is benchmarked on a perimeter surveillance stochastic control problem.
Proceedings of the 49th IEEE Conference on Decision and Control, CDC 2010, Atlanta, Georgia, USA, December 15-17, 2010.
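The meta-state construction mentioned in this related abstract can be illustrated with a short sketch: given a partition of the original states, one candidate way to derive meta-state transition probabilities is to average the original kernel uniformly within each partition. The uniform within-partition weighting and the function name aggregate_kernel are assumptions for illustration; the paper derives its own construction from the Markov chain specification.

```python
import numpy as np

def aggregate_kernel(P, partition):
    """Aggregate a transition matrix P (nS x nS, one action) onto meta-states.

    partition maps each state to its meta-state index; within each meta-state
    the original states are weighted uniformly (an assumption made here).
    """
    nK = partition.max() + 1
    Phi = np.eye(nK)[partition]        # (nS x nK) indicator matrix
    mu = Phi / Phi.sum(axis=0)         # uniform distribution within each meta-state
    return mu.T @ P @ Phi              # (nK x nK), rows sum to 1

partition = np.array([0, 0, 1, 1])
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.0, 0.0, 0.4, 0.6],
              [0.1, 0.0, 0.0, 0.9]])
print(aggregate_kernel(P, partition))  # aggregated 2x2 chain over meta-states
```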
