Successor Uncertainties: Exploration and Uncertainty in Temporal Difference Learning

Abstract

Posterior sampling for reinforcement learning (PSRL) is an effective method for balancing exploration and exploitation in reinforcement learning. Randomised value functions (RVF) can be viewed as a promising approach to scaling PSRL. However, we show that most contemporary algorithms combining RVF with neural network function approximation do not possess the properties which make PSRL effective, and provably fail in sparse reward problems. Moreover, we find that propagation of uncertainty, a property of PSRL previously thought important for exploration, does not preclude this failure. We use these insights to design Successor Uncertainties (SU), a cheap and easy to implement RVF algorithm that retains key properties of PSRL. SU is highly effective on hard tabular exploration benchmarks. Furthermore, on the Atari 2600 domain, it surpasses human performance on 38 of 49 games tested (achieving a median human normalised score of 2.09), and outperforms its closest RVF competitor, Bootstrapped DQN, on 36 of those.

Authors
  • Janz, David
  • Hron, Jiri
  • Mazur, Przemysław
  • Hofmann, Katja
  • Hernández-Lobato, José Miguel
  • Tschiatschek, Sebastian
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
33rd Conference on Neural Information Processing Systems (NeurIPS)
Divisions
Data Mining and Machine Learning
Event Location
Vancouver, Canada
Event Type
Conference
Event Dates
8–14 December 2019
Series Name
Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Date
8 December 2019
Official URL
https://arxiv.org/pdf/1810.06530.pdf