Nothing in life is to be feared, it is only to be understood.
Now is the time to understand more, so that we may fear less.

— Marie Curie

Selected Publications

Distributional Reinforcement Learning with Quantile Regression

In this paper, we build on recent work advocating a distributional approach to reinforcement learning, in which the distribution over returns is modeled explicitly instead of only estimating the mean. We give results that close a number of gaps between the theoretical and algorithmic results given by Bellemare, Dabney, and Munos (2017). First, we extend existing results to the approximate distribution setting. Second, we present a novel distributional reinforcement learning algorithm consistent with our theoretical formulation. Finally, we evaluate this new algorithm on the Atari 2600 games, observing that it significantly outperforms many of the recent improvements on DQN, including the related distributional algorithm C51.
AAAI, 2018
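
As a rough illustration of the quantile-regression idea in this paper, here is a minimal NumPy sketch (not the authors' released code) of a quantile Huber loss between a set of predicted return quantiles and sampled target returns. The function name, the `kappa` threshold, and the array shapes are illustrative assumptions.

```python
import numpy as np

def quantile_huber_loss(theta, targets, kappa=1.0):
    """Quantile Huber loss between N predicted quantiles `theta`
    and M sampled target returns `targets` (both 1-D arrays)."""
    n = len(theta)
    # Midpoint quantile fractions tau_hat_i = (2i - 1) / (2N).
    tau = (np.arange(n) + 0.5) / n
    # Pairwise TD errors u_ij = target_j - theta_i, shape (N, M).
    u = targets[None, :] - theta[:, None]
    # Huber loss with threshold kappa.
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric quantile weighting |tau - 1{u < 0}|.
    weight = np.abs(tau[:, None] - (u < 0).astype(float))
    # Sum over quantiles, average over target samples.
    return (weight * huber / kappa).sum(axis=0).mean()

# Example: score 5 flat quantile estimates against sampled returns.
rng = np.random.default_rng(0)
print(quantile_huber_loss(np.zeros(5), rng.normal(1.0, 0.5, size=32)))
```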

A Distributional Perspective on Reinforcement Learning

In this paper we argue for the fundamental importance of the value distribution: the distribution of the random return received by a reinforcement learning agent. This is in contrast to the common approach to reinforcement learning, which models the expectation of this return, or value.
ICML, 2017
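
The central object here is the random return Z, which satisfies a distributional analogue of the Bellman equation; a minimal statement, following the paper's notation:

```latex
% Distributional Bellman equation: the random return Z(x, a) is
% equal in distribution to the immediate reward plus the discounted
% random return at the next state-action pair (X', A').
Z(x, a) \stackrel{D}{=} R(x, a) + \gamma \, Z(X', A')
```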

Recent Publications

Distributional Policy Gradients. ICLR, 2018.

The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning. ICLR, 2018.

An Analysis of Categorical Distributional Reinforcement Learning. AISTATS, 2018.

Distributional Reinforcement Learning with Quantile Regression. AAAI, 2018.

Rainbow: Combining Improvements in Deep Reinforcement Learning. AAAI, 2018.

Successor Features for Transfer in Reinforcement Learning. NIPS, 2017.

A Distributional Perspective on Reinforcement Learning. ICML, 2017.
