OREGON STATE UNIVERSITY


An Overview of MAXQ Hierarchical Reinforcement Learning

Title: An Overview of MAXQ Hierarchical Reinforcement Learning
Publication Type: Conference Paper
Year of Publication: 2000
Authors: Dietterich, T. G.
Conference Name: Proceedings of the 4th International Symposium on Abstraction, Reformulation, and Approximation
Pagination: 26–44
Date Published: 07/2000
Publisher: Springer-Verlag
Conference Location: London, UK
ISBN Number: 3-540-67839-5
Abstract

Reinforcement learning addresses the problem of learning optimal policies for sequential decision-making problems involving stochastic operators and numerical reward functions rather than the more traditional deterministic operators and logical goal predicates. In many ways, reinforcement learning research is recapitulating the development of classical research in planning and problem solving. After studying the problem of solving "flat" problem spaces, researchers have recently turned their attention to hierarchical methods that incorporate subroutines and state abstractions. This paper gives an overview of the MAXQ value function decomposition and its support for state abstraction and action abstraction.
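As a brief illustration of the decomposition the abstract refers to: in MAXQ, the action-value function of a parent subtask is split recursively into the value of invoking a child and a "completion" term. The following is a sketch of the standard form of that recursion (symbols follow the common presentation: $i$ is a subtask, $a$ a child action, $C$ the completion function); consult the paper itself for the authoritative statement.

\begin{align}
Q(i, s, a) &= V(a, s) + C(i, s, a) \\
V(i, s) &=
\begin{cases}
\max_{a} Q(i, s, a) & \text{if } i \text{ is composite} \\
\mathbb{E}[\,r \mid s, i\,] & \text{if } i \text{ is primitive}
\end{cases}
\end{align}

Here $C(i, s, a)$ is the expected discounted reward for completing subtask $i$ after child $a$ terminates, so the value of the root task decomposes into a sum of values along the path of the task hierarchy.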

URL: http://dl.acm.org/citation.cfm?id=645847.668653