Reinforcement learning via approximation of the Q-function

Article in Journal of Experimental & Theoretical Artificial Intelligence 22(3):219–235 · September 2010
Impact Factor: 1.00 · DOI: 10.1080/09528130903157377 · Source: DBLP

    Abstract

    Relational reinforcement learning (RRL) combines traditional reinforcement learning (RL) with a strong emphasis on a relational (rather than attribute-value) representation. Earlier work used RRL on a learning version of the classic Blocks World planning problem (a version where the learner does not know what the result of taking an action will be) and the Tetris game. Learning results that depended on the structure of the training examples were obtained, such as learning in a mixed 3–5-block environment and then performing well in 3- or 10-block environments. Here, we instead take a function approximation approach to RL for the Blocks World problem. We obtain similar learning accuracies, with better running times, allowing us to consider much larger problem sizes. For instance, we can train on 15 blocks and then perform well on worlds with 100–800 blocks, using less running time than the relational method required to perform well on 3–10 blocks.
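
    For readers unfamiliar with the general technique the abstract refers to, the sketch below shows Q-learning with linear function approximation of the Q-function. It is a minimal illustration, not the paper's specific method: the feature map `phi` and the action set are hypothetical placeholders, and a Blocks World learner would supply domain-specific features (e.g., counts of correctly stacked blocks) in their place.

    import random

    def q_value(weights, features):
        """Approximate Q(s, a) as a dot product of a weight vector and features phi(s, a)."""
        return sum(w * f for w, f in zip(weights, features))

    def q_learning_step(weights, phi, state, action, reward, next_state, actions,
                        alpha=0.1, gamma=0.9):
        """One gradient-style weight update from a single observed transition."""
        feats = phi(state, action)
        prediction = q_value(weights, feats)
        # Bootstrapped target: best approximate Q-value in the successor state.
        best_next = max(q_value(weights, phi(next_state, a)) for a in actions)
        td_error = reward + gamma * best_next - prediction
        return [w + alpha * td_error * f for w, f in zip(weights, feats)]

    def epsilon_greedy(weights, phi, state, actions, epsilon=0.1):
        """Explore with probability epsilon; otherwise pick the greedy action."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q_value(weights, phi(state, a)))

    Because the learned weights are tied to features rather than to an enumeration of concrete states, a policy trained on small worlds (e.g., 15 blocks) can be applied directly to much larger ones, which is the scaling behaviour the abstract describes.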