Article

Reinforcement learning via approximation of the Q-function.

J. Exp. Theor. Artif. Intell. 01/2010; 22:219–235. DOI: 10.1080/09528130903157377
Source: DBLP

ABSTRACT Relational reinforcement learning (RRL) combines traditional reinforcement learning (RL) with a strong emphasis on a relational (rather than attribute-value) representation. Earlier work used RRL on a learning version of the classic Blocks World planning problem (a version where the learner does not know what the result of taking an action will be) and the Tetris game. Learning results based on the structure of training examples were obtained, such as learning in a mixed 3–5 block environment and then performing in a 3- or 10-block environment. Here, we instead take a function approximation approach to RL for the Blocks World problem. We obtain similar learning accuracies, with better running times, allowing us to consider much larger problem sizes. For instance, we can train on 15 blocks and then perform well on worlds with 100–800 blocks, using less running time than the relational method required to perform well on 3–10 blocks.
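(For orientation only: the abstract does not give implementation details. A minimal sketch of Q-learning with a linear approximation of the Q-function, assuming a hand-crafted feature map over block configurations; the names, feature map, and hyperparameters below are hypothetical, not taken from the paper.)

from typing import Sequence

def linear_q(weights: Sequence[float], features: Sequence[float]) -> float:
    """Approximate Q(s, a) as a dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, features))

def q_learning_update(weights, phi, transition, actions, alpha=0.1, gamma=0.9):
    """One temporal-difference update of the linear Q-function approximation.

    phi(state, action) -> feature vector; a hypothetical feature map (e.g.
    counts of correctly stacked blocks, stack heights) whose length does not
    depend on the number of blocks in the world.
    """
    state, action, reward, next_state = transition
    feats = phi(state, action)
    # Bootstrapped target: r + gamma * max over a' of Q(s', a')
    target = reward + gamma * max(
        linear_q(weights, phi(next_state, a)) for a in actions
    )
    td_error = target - linear_q(weights, feats)
    # For a linear approximator the gradient w.r.t. the weights is the feature vector.
    return [w + alpha * td_error * f for w, f in zip(weights, feats)]

Because the weight vector depends only on the number of features, not on the number of blocks, a policy trained on small worlds (e.g. 15 blocks) can in principle be evaluated on much larger ones, which is consistent with the scaling behaviour the abstract reports.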

    ABSTRACT: The main challenge in reinforcement learning is scaling up to larger and more complex problems. To address this scaling problem, a scalable reinforcement learning method based on a divide-and-conquer strategy, DCS-SRL, is proposed and its convergence is proved. In this method, a learning problem over a large or continuous state space is decomposed into multiple smaller subproblems. Given a specific learning algorithm, each subproblem can be solved independently with limited resources, and the component solutions are then recombined to obtain the overall result. To prioritize subproblems in the scheduler, a weighted priority scheduling algorithm is proposed; it focuses computation on the regions of the problem space expected to be most productive. To further speed up learning, a parallel method, DCS-SPRL, is derived by combining DCS-SRL with a parallel scheduling architecture, distributing the subproblems among processors that can work in parallel. Experimental results show that DCS-SPRL converges quickly and scales well.
    Frontiers of Computer Science 01/2012; 6(6).
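(Again for orientation only: the abstract above describes the weighted priority scheduler only at a high level. The sketch below shows one plausible shape for such a scheduler, assuming priorities are weighted expected gains; the class and function names are assumptions, not taken from the DCS-SRL/DCS-SPRL work.)

import heapq

class SubproblemScheduler:
    """Weighted priority queue over subproblems of a decomposed state space."""

    def __init__(self):
        self._heap = []      # entries: (negated priority, insertion index, subproblem)
        self._counter = 0    # tie-breaker so subproblems are never compared directly

    def add(self, subproblem, expected_gain: float, weight: float = 1.0) -> None:
        # Higher weighted expected gain means the subproblem is scheduled earlier.
        priority = weight * expected_gain
        heapq.heappush(self._heap, (-priority, self._counter, subproblem))
        self._counter += 1

    def next(self):
        """Pop the subproblem currently expected to be most productive."""
        _, _, subproblem = heapq.heappop(self._heap)
        return subproblem

    def __len__(self):
        return len(self._heap)

def solve(scheduler, learn_subproblem, combine):
    """Run subproblems in priority order, then recombine the partial solutions."""
    partial_solutions = []
    while len(scheduler) > 0:
        partial_solutions.append(learn_subproblem(scheduler.next()))
    return combine(partial_solutions)

A learner would add each subproblem with a weight reflecting its expected contribution, pop them in priority order, and recombine the per-subproblem solutions, mirroring the decompose/solve/recombine flow the abstract describes; the parallel DCS-SPRL variant would instead hand popped subproblems to worker processors.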