I am a Research Scientist at DeepMind in London. Before that, I completed my Ph.D. in Computer Science and Master's in Philosophy at Brown University, where I was fortunate to be advised by Prof. Michael Littman (CS) and Prof. Joshua Schechter (Philosophy).
My research focuses on bringing clarity to the central philosophical questions surrounding computation and learning.
I value research that concentrates on providing new understanding, and tend to get excited by simple but foundational questions. I typically work with the reinforcement learning problem, drawing on tools and perspectives from computational learning theory, computational complexity, and analytic philosophy.
I am currently interested in better defining the AI problem. Previously, my dissertation studied how effective agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning.
On the Expressivity of Markov Reward
NeurIPS 2021
We study the expressivity of Markov reward functions in finite environments by analysing what kinds of tasks such functions can express.
Joint work with Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh.
A Theory of Abstraction in Reinforcement Learning
Ph.D. Thesis, 2020
My dissertation, aimed at understanding abstraction and its role in effective reinforcement learning.
Advised by Michael L. Littman.
Value Preserving State-Action Abstractions
AISTATS 2020
We prove which combinations of state abstractions and options are guaranteed to preserve representation of near-optimal policies in any finite Markov Decision Process.
Joint work with Nathan Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, and Michael L. Littman.
We examine the Lipschitz continuity of value functions and MDPs, then exploit these properties to develop a PAC-MDP algorithm for lifelong RL called Lipschitz RMax.
Led by Erwan Lecarpentier, joint with Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, and Michael L. Littman.
We develop a theory of affordances in the context of RL and planning.
We develop a model that characterizes the planned use of information processing as a meta-reasoning problem and study this model's capacity to predict human reaction times in simple tasks.
The Value of Abstraction
Current Opinion in Behavioral Sciences 2019
We discuss the vital role that abstraction plays in efficient decision making.
The Expected-Length Model of Options
IJCAI 2019
We introduce and motivate the Expected-Length Model of Options, a simpler alternative for characterizing the transition and reward functions of options.
We study state abstractions that trade off compression against optimality through the lens of rate-distortion theory.
We prove that the problem of finding options that minimize planning time is NP-Hard.
For fun, I'm a big fan of basketball, snowboarding, games, and music (I play guitar/piano/violin and mostly listen to progressive metal). I now live in London, UK, with my wife Elizabeth and our dog Barley.
Always up for a chat -- shoot me an email if you'd like to discuss anything!