Reinforcement Learning via Practice and Critique Advice

Title: Reinforcement Learning via Practice and Critique Advice
Publication Type: Conference Paper
Year of Publication: 2010
Authors: Judah, K., S. Roy, A. Fern, and T. G. Dietterich
Conference Name: AAAI Conference on Artificial Intelligence (AAAI-10)
Date Published: 07/2010
Conference Location: Atlanta, GA

We consider the problem of incorporating end-user advice into reinforcement learning (RL). In our setting, the learner alternates between practicing, where learning is based on actual world experience, and end-user critique sessions where advice is gathered. During each critique session the end-user is allowed to analyze a trajectory of the current policy and then label an arbitrary subset of the available actions as good or bad. Our main contribution is an approach for integrating all of the information gathered during practice and critiques in order to effectively optimize a parametric policy. The approach optimizes a loss function that linearly combines losses measured against the world experience and the critique data. We evaluate our approach using a prototype system for teaching tactical battle behavior in a real-time strategy game engine. We report results from a substantial evaluation involving ten end-users; the results show the promise of this approach and also highlight the challenges of inserting end-users into the RL loop.
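To make the core idea of the abstract concrete, the following is a minimal sketch of a loss that linearly combines a practice (world-experience) term with a critique term for a parametric policy. The specific component losses, the feature/label shapes, and the mixing weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def combined_loss(theta, world_returns, world_features,
                  critique_features, critique_labels, lam=0.5):
    """Linear combination of a practice loss and a critique loss.

    All quantities here are hypothetical stand-ins: a softmax-weighted
    negative return serves as the practice loss, and a logistic loss on
    actions labeled good (+1) or bad (-1) serves as the critique loss.
    """
    # Practice loss: negative expected return under a softmax scoring
    # of observed (state-feature, return) pairs.
    scores = world_features @ theta
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    practice_loss = -np.dot(probs, world_returns)

    # Critique loss: logistic loss on end-user-labeled actions.
    margins = critique_labels * (critique_features @ theta)
    critique_loss = np.mean(np.log1p(np.exp(-margins)))

    # Linear combination, as described in the abstract.
    return practice_loss + lam * critique_loss
```

Minimizing this objective over `theta` (e.g., by gradient descent) trades off fidelity to world experience against agreement with the end-user's critiques, with `lam` controlling the balance.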