A Standard Benchmarking System for Reinforcement Learning

University of Alberta (Canada), 2006 - Reinforcement learning - 74 pages
We introduce a standard framework for benchmarking in reinforcement learning. Benchmarks facilitate the comparison of alternative algorithms and can greatly accelerate research progress. The University of California, Irvine (UCI) machine learning database, for example, was highly effective in driving progress in supervised learning. Creating a similar benchmarking resource for reinforcement learning is more challenging because reinforcement learning agents and environments interact to generate observations, actions, and rewards. The observations and rewards received by the learning agent depend on the actions it takes; these training data cannot simply be stored in a file as they are in supervised learning. Instead, the reinforcement learning agent and environment must be interacting programs. Our benchmarking framework is a standard for communication between these programs. Our protocol (1) guarantees exact reproducibility of the execution sequence of a learning experiment, (2) enables plug-and-play interchanging of environments and agents, (3) is general and powerful yet non-intrusive, and (4) allows existing agents and environments to be converted with little effort. The current implementation features a lightweight software design with layered functionality, support for multiple programming languages, and support for agent-environment interaction across a network. We illustrate these advantages with examples from the newly established University of Alberta Reinforcement Learning Library, which is based on our protocol. We conclude by presenting benchmarks for the Grid-world, General Cat and Mouse, Schapire Cat and Mouse, Blackjack, Sensor Network, GARNET, Acrobot, Random, Delayed, Stochastic, and Non-stationary Mountain Car tasks.
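
To make the style of agent-environment interface described in the abstract more concrete, the following is a minimal sketch in Python. It is an illustration only: the class names, method names, toy environment dynamics, and seeding scheme are assumptions chosen for the example, not the protocol's actual API.

import random

class Environment:
    """Toy environment with a seeded RNG so a learning experiment's
    execution sequence can be reproduced exactly from the seed."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = 0

    def env_start(self):
        self.state = 0
        return self.state                       # first observation

    def env_step(self, action):
        self.state += action + self.rng.choice([0, 1])
        reward = 1.0 if self.state >= 10 else 0.0
        terminal = self.state >= 10
        return reward, self.state, terminal

class Agent:
    """Trivial agent; any agent honoring the same two calls can be
    swapped in without touching the environment (plug and play)."""
    def agent_start(self, observation):
        return 1                                # first action

    def agent_step(self, reward, observation):
        return 1                                # always step forward

def run_episode(env, agent):
    """Experiment loop that mediates all communication between the
    agent and environment; neither side calls the other directly."""
    obs = env.env_start()
    action = agent.agent_start(obs)
    total_reward, terminal = 0.0, False
    while not terminal:
        reward, obs, terminal = env.env_step(action)
        total_reward += reward
        if not terminal:
            action = agent.agent_step(reward, obs)
    return total_reward

print(run_episode(Environment(seed=42), Agent()))

In this sketch the experiment loop, rather than the agent or the environment, owns the flow of control, which is one plausible way to obtain the interchangeability and reproducibility properties the abstract claims for the protocol.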
