Book
PDF available

Abstract

From the Publisher: This is the first textbook that fully explains the neuro-dynamic programming/reinforcement learning methodology, which is a recent breakthrough in the practical application of neural networks and dynamic programming to complex problems of planning, optimal decision making, and intelligent control.
... which is a contraction (Bertsekas and Tsitsiklis 1996) with the unique fixed point $Q^{\pi}(s, a)$. ...
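For context, the operator in question is the Bellman operator for a fixed policy $\pi$; in the standard finite, discounted setting (with a deterministic $\pi$ for simplicity) it reads
$$ (T^{\pi} Q)(s,a) \;=\; r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, Q\big(s', \pi(s')\big), $$
and for discount factor $\gamma < 1$ it satisfies $\|T^{\pi} Q_1 - T^{\pi} Q_2\|_{\infty} \le \gamma \|Q_1 - Q_2\|_{\infty}$, i.e. it is a sup-norm contraction whose unique fixed point is $Q^{\pi}$.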
Article
Mixed Integer Linear Program (MILP) solvers are mostly built upon a Branch-and-Bound (B&B) algorithm, where the efficiency of traditional solvers heavily depends on hand-crafted heuristics for branching. The past few years have witnessed the increasing popularity of data-driven approaches to automatically learn these heuristics. However, the success of these methods is highly dependent on the availability of high-quality demonstrations, which requires either the development of near-optimal heuristics or a time-consuming sampling process. This paper averts this challenge by proposing Suboptimal-Demonstration-Guided Reinforcement Learning (SORREL) for learning to branch. SORREL selectively learns from suboptimal demonstrations based on value estimation. It utilizes suboptimal demonstrations through both offline reinforcement learning on the demonstrations generated by suboptimal heuristics and self-imitation learning on past good experiences sampled by itself. Our experiments demonstrate its advanced performance in both branching quality and training efficiency over previous methods for various MILPs.
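As a rough illustration of the self-imitation idea mentioned in the abstract (not SORREL itself, and with hypothetical variable names), one common criterion is to reuse only those past transitions whose observed return beat the agent's current value estimate:

```python
import numpy as np

def self_imitation_filter(returns, values):
    """Keep only past transitions whose observed return exceeds the current
    value estimate, i.e. experiences that turned out better than expected."""
    advantages = returns - values
    keep = advantages > 0.0
    return np.clip(advantages, 0.0, None), keep

# Hypothetical stored episode data: Monte-Carlo returns R and value estimates V(s).
returns = np.array([1.0, 0.2, 0.7])
values = np.array([0.5, 0.4, 0.9])
clipped_adv, keep = self_imitation_filter(returns, values)
# A policy/value update would then weight the kept samples by clipped_adv.
```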
... RL is a computational technique whose principle is to optimize a control policy by maximizing the expected cumulative reward from the current state to the final state based on a trial-and-error approach [9,10]. Model-based RL methods, such as policy iteration (PI) and value iteration (VI), require complete knowledge of the system's dynamics, which limits the applicability of RL techniques [11]. To address this issue, a Q-learning algorithm has been proposed to compute optimal control policies in the absence of any specific knowledge of the system dynamics [12]. ...
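To make the snippet concrete, a minimal tabular Q-learning sketch is shown below; the environment interface (reset/step returning a state index, reward, and done flag) and the hyperparameters are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning sketch: the Q-table is updated from sampled
    transitions only, with no model of the transition dynamics."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # move Q(s, a) toward the one-step Bellman optimality target
            target = r + gamma * (0.0 if done else np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```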
Preprint
This article investigates the optimal control problem with disturbance rejection for discrete-time multi-agent systems under cooperative and non-cooperative graphical games frameworks. Given the practical challenges of obtaining accurate models, Q-function-based policy iteration methods are proposed to seek the Nash equilibrium solution for the cooperative graphical game and the distributed minmax solution for the non-cooperative graphical game. To implement these methods online, two reinforcement learning frameworks are developed: an actor-disturber-critic structure for the cooperative graphical game and an actor-adversary-disturber-critic structure for the non-cooperative graphical game. The stability of the proposed methods is rigorously analyzed, and simulation results are provided to illustrate the effectiveness of the proposed methods.
Article
We investigate the convergence properties of policy iteration and value iteration algorithms in reinforcement learning by leveraging fixed-point theory, with a focus on mappings that exhibit weak contractive behavior. Unlike traditional studies that rely on strong contraction properties, such as those defined by the Banach contraction principle, we consider a more general class of mappings that includes weak contractions. Employing Zamfirescu’s fixed-point theorem, we establish sufficient conditions for norm convergence in infinite-dimensional policy spaces under broad assumptions. Our approach extends the applicability of these algorithms to feedback control problems in reinforcement learning, where standard contraction conditions may not hold. Through illustrative examples, we demonstrate that this framework encompasses a wider range of operators, offering new insights into the robustness and flexibility of iterative methods in dynamic programming.
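For orientation (paraphrasing standard fixed-point results rather than the paper itself): the Banach condition usually assumed for dynamic-programming operators requires an $\alpha \in [0,1)$ with
$$ d(Tx, Ty) \le \alpha\, d(x, y) \quad \text{for all } x, y, $$
whereas Zamfirescu's theorem only asks that, for each pair $(x, y)$, at least one of
$$ d(Tx,Ty) \le \alpha\, d(x,y), \qquad d(Tx,Ty) \le \beta\big[d(x,Tx) + d(y,Ty)\big], \qquad d(Tx,Ty) \le \delta\big[d(x,Ty) + d(y,Tx)\big] $$
holds, with $\alpha \in [0,1)$ and $\beta, \delta \in [0,\tfrac{1}{2})$; on a complete metric space this still guarantees a unique fixed point to which the Picard iterates $x_{k+1} = T x_k$ converge.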
Chapter
Harnessing the power of data and AI methods to tackle complex societal challenges requires transdisciplinary collaborations across academia, industry, and government. In this compelling book, Munther A. Dahleh, founder of the MIT Institute for Data, Systems, and Society (IDSS), offers a blueprint for researchers, professionals, and institutions to create approaches to problems of high societal value using innovative, holistic, data-driven methods. Drawing on his experience at IDSS and knowledge of similar initiatives elsewhere, Dahleh describes in clear, non-technical language how statistics, data science, information and decision systems, and social and institutional behavior intersect across multiple domains. He illustrates key concepts with real-life examples from optimizing transportation to making healthcare decisions during pandemics to understanding the media's impact on elections and revolutions. Dahleh also incorporates crucial concepts such as robustness, causality, privacy, and ethics and shares key lessons learned about transdisciplinary communication and about unintended consequences of AI and algorithmic systems.
Article
Full-text available
With the advancement in computing power and data science techniques, reinforcement learning (RL) has emerged as a powerful tool for decision-making problems in complex systems. In recent years, the research on RL for healthcare operations has grown rapidly. Especially during the COVID-19 pandemic, RL has played a critical role in optimizing decisions with greater degrees of uncertainty. RL for healthcare applications has been an exciting topic across multiple disciplines, including operations research, operations management, healthcare systems engineering, and data science. This review paper first provides a tutorial on the overall framework of RL, including its key components, training models, and approximators. Then, we present the recent advances of RL in the domain of healthcare operations management (HOM) and analyze the current trends. Our paper concludes by presenting existing challenges and future directions for RL in HOM.
Article
Full-text available
Learning methods based on dynamic programming (DP) are receiving increasing attention in artificial intelligence. Researchers have argued that DP provides the appropriate basis for compiling planning results into reactive strategies for real-time control, as well as for learning such strategies when the system being controlled is incompletely known. We introduce an algorithm based on DP, which we call Real-Time DP (RTDP), by which an embedded system can improve its performance with experience. RTDP generalizes Korf's Learning-Real-Time-A* algorithm to problems involving uncertainty. We invoke results from the theory of asynchronous DP to prove that RTDP achieves optimal behavior in several different classes of problems. We also use the theory of asynchronous DP to illuminate aspects of other DP-based reinforcement learning methods such as Watkins' Q-Learning algorithm. A secondary aim of this article is to provide a bridge between AI research on real-time planning and learning and relevant concepts and algorithms from control theory.
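A minimal sketch of one RTDP trial is given below, written in reward-maximization form with an assumed model interface (actions(s), transition(s, a) returning a dict of successor probabilities, and reward(s, a, s2)); it illustrates the backup-along-visited-states idea rather than the authors' exact pseudocode.

```python
import numpy as np

def rtdp_trial(start, V, actions, transition, reward, goal=None,
               gamma=0.95, horizon=100):
    """One Real-Time DP trial: perform Bellman backups only at the states
    actually visited, and act greedily w.r.t. the current value estimates."""
    s = start
    for _ in range(horizon):
        if s == goal:
            break
        # one-step lookahead: Q(s,a) = sum_{s'} P(s'|s,a) * (r(s,a,s') + gamma * V(s'))
        q = {a: sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
             for a in actions(s)}
        a_star = max(q, key=q.get)      # greedy action under current V
        V[s] = q[a_star]                # asynchronous Bellman backup at the visited state
        # sample (or execute) the greedy action's transition and continue the trial
        succs, probs = zip(*transition(s, a_star).items())
        s = succs[np.random.choice(len(succs), p=list(probs))]
    return V
```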
Article
Full-text available
Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(λ) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(λ) and Q-learning belong.
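The class of iterations covered has the generic stochastic-approximation form below (paraphrased in a standard way, not quoted from the paper):
$$ \Delta_{t+1}(i) \;=\; \big(1 - \alpha_t(i)\big)\,\Delta_t(i) \;+\; \alpha_t(i)\,\big(F_i(\Delta_t) + w_t(i)\big), $$
which converges to the fixed point of $F$ when $F$ is a (weighted) max-norm contraction, the noise $w_t$ has zero mean and suitably bounded variance given the past, and the step sizes satisfy the Robbins–Monro conditions $\sum_t \alpha_t(i) = \infty$ and $\sum_t \alpha_t(i)^2 < \infty$.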
Article
This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
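The simplest member of this family, TD(0), makes the "difference between temporally successive predictions" explicit (written here in modern notation, with reward $r_{t+1}$, discount $\gamma$, and step size $\alpha$):
$$ V(s_t) \;\leftarrow\; V(s_t) + \alpha\,\big[\, r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \,\big], $$
where the bracketed temporal-difference error replaces the prediction-minus-outcome error used by conventional supervised prediction methods.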
Article
This paper examines whether temporal difference methods for training connectionist networks, such as Sutton's TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
Conference Paper
This paper provides some general results on the convergence of a class of stochastic approximation algorithms and their parallel and asynchronous variants. The author then uses these results to study the Q-learning algorithm, a reinforcement learning method for solving Markov decision problems, and establishes its convergence under conditions more general than those previously available.
Learning from delayed rewards
  • C. J. C. H. Watkins