Andrew (Drew) Jaegle

I'm a Research Scientist at DeepMind in London, where I work on artificial intelligence with an amazing group of collaborators.

Before DeepMind, I worked on computer vision and computational neuroscience at the University of Pennsylvania, where I did a PhD in the GRASP Lab with Kostas Daniilidis and a postdoc with Nicole Rust. I also spent time in the labs of Diego Contreras and Maria Geffen and with the folks in the Computational Neuroscience Initiative. During my PhD, I did research internships at the Max Planck Institute for Intelligent Systems, where I worked with Michael Black and Javier Romero, and at DeepMind, where I worked with Greg Wayne. Prior to that, I worked on consciousness and attention at the City University of New York with Tony Ro. I have a BA from Texas A&M University, where I studied philosophy, music composition, and mathematics.

Email  /  CV  /  Google Scholar  /  Twitter  /  LinkedIn



My research is focused on topics in AI including vision and sensory reasoning, imitation, and unsupervised learning. I'm broadly interested in perception and behavior, and I also like to think about and work on philosophy and music.


Game Plan: What AI can do for Football, and What Football can do for AI

Karl Tuyls*, Shayegan Omidshafiei*, Paul Muller, Zhe Wang, Jerome Connor, Daniel Hennes, Ian Graham, William Spearman, Tim Waskett, Dafydd Steele, Pauline Luc, Adrià Recasens, Alexandre Galashov, Gregory Thornton, Romuald Elie, Pablo Sprechmann, Pol Moreno, Kris Cao, Marta Garnelo, Praneet Dutta, Michal Valko, Nicolas Heess, Alex Bridgland, Julien Pérolat, Bart De Vylder, Ali Eslami, Mark Rowland, Andrew Jaegle, Rémi Munos, Trevor Back, Razia Ahamed, Simon Bouton, Nathalie Beauguerlange, Jackson Broshear, Thore Graepel, Demis Hassabis
arXiv, 2020
paper / pdf

We give an overview of the challenges in football analytics and how tools from AI (including computer vision, game theory, and statistical learning) can give new and better insights.


Physically embedded planning problems: New challenges for reinforcement learning

Mehdi Mirza*, Andrew Jaegle*, Jonathan Hunt*, Arthur Guez*, Saran Tunyasuvunakool, Alistair Muldal, Théophane Weber, Peter Karkus, Sébastien Racanière, Lars Buesing, Timothy Lillicrap, Nicolas Heess
arXiv, 2020
paper / pdf / video / code

We introduce a suite of difficult MuJoCo domains that pair high-level, abstract structure with physical embodiment. These domains are very challenging for current black-box RL methods, but can be solved by building in high-level structure: we challenge you to bridge this gap!


Beyond Tabula-Rasa: a Modular Reinforcement Learning Approach for Physically Embedded 3D Sokoban

Peter Karkus, Mehdi Mirza, Arthur Guez, Andrew Jaegle, Timothy Lillicrap, Lars Buesing, Nicolas Heess, Théophane Weber
ICLR workshop: Beyond tabula rasa in reinforcement learning, 2020
paper / pdf

We revisit the classic sense-plan-act structure to build a modular reinforcement learning agent that can efficiently solve physical tasks with abstract structure.


Hamiltonian generative networks

Peter Toth*, Danilo Rezende*, Andrew Jaegle, Sébastien Racanière, Alex Botev, Irina Higgins
International Conference on Learning Representations (ICLR), 2020
paper / pdf

We develop methods for learning the structure of physical systems from video without supervision using generative models built with ideas from Hamiltonian mechanics.


Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning

Karl Pertsch*, Oleh Rybkin*, Jingyung Yang, Shenghao Zhou, Kosta Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle
Learning for Dynamics and Control (L4DC), 2020
paper / project page / pdf

We develop a model that first predicts important moments in the future and then uses these to infer the missing details, without supervision.


Visual novelty, curiosity, and intrinsic reward in machine learning and the brain

Andrew Jaegle, Vahid Mehrpour, Nicole Rust
Current Opinion in Neurobiology, 2019
paper / pdf

We draw connections between two literatures with important but largely unappreciated links: intrinsic motivation in AI and visual novelty in neuroscience and psychology.


Population response magnitude variation in inferotemporal cortex predicts image memorability

Andrew Jaegle*, Vahid Mehrpour*, Yalda Mohsenzadeh*, Travis Meyer, Aude Oliva, Nicole Rust
eLife, 2019
paper / pdf

We identify a simple signature of an image’s memorability – how easy it is for people to remember, compared to other images – that is shared by the brain and convolutional neural networks.


Codes, functions, and causes: A critique of Brette's conceptual analysis of coding

David Barack and Andrew Jaegle
Behavioral and Brain Sciences, 2019
paper / pdf

We argue for the validity of functional decomposition and approximation in causal analyses of complex systems like brains and deep networks (also tents). A response to Brette 2018 (and see his response to our response).


Learning what you can do before doing anything

Oleh Rybkin*, Karl Pertsch*, Kosta Derpanis, Kostas Daniilidis, Andrew Jaegle
International Conference on Learning Representations (ICLR), 2019
paper / project page / pdf / video

We develop an unsupervised method that captures the latent structure of behavior from video using compositional video prediction models.


Predicting the Future with Transformational States

Andrew Jaegle, Oleh Rybkin, Kosta Derpanis, Kostas Daniilidis
arXiv, 2018
paper / pdf

We develop methods for video prediction by reasoning about changes to static content in a learned latent space. Check out Oleh’s blog post discussing some of the problems with using standard reconstruction losses.


Understanding image motion with group representations

Andrew Jaegle*, Stephen Phillips*, Daphne Ippolito, Kostas Daniilidis
International Conference on Learning Representations (ICLR), 2018
paper / pdf

We develop simple methods for unsupervised learning of motion in videos using ideas from group theory.


Fast, robust, continuous monocular egomotion computation

Andrew Jaegle*, Stephen Phillips*, Kostas Daniilidis
International Conference on Robotics and Automation (ICRA), 2016
paper / pdf / code

We develop robust methods for estimating camera motion in real-world video by reasoning about the contribution of each element of an optic flow field to the global motion estimate.


Emergence of invariant representation of vocalizations in the auditory cortex

Isaac Carruthers, Diego Laplagne, Andrew Jaegle, John Briguglio, Laetitia Mwilambwe-Thilobo, Ryan Natan, Maria Geffen
Journal of Neurophysiology, 2015
paper / pdf

We characterize how nuisance transformations – changes to a sound that don’t change its meaning, like saying the same word faster or slower – are tuned out as signals move from an earlier to a later stage of auditory cortex.


Second order receptive field properties of simple and complex cells support a new standard model

Madineh Sedigh-Sarvestani, Iván Fernández-Lamo, Andrew Jaegle, Morgan Taylor
Journal of Neuroscience, 2014
paper / pdf

We critically review Fournier et al.’s work characterizing the structure of synaptic receptive fields in visual cortex.


Direct control of visual perception with phase-specific modulation of posterior parietal cortex

Andrew Jaegle, Tony Ro
Journal of Cognitive Neuroscience, 2014
paper / pdf

We provide evidence for a causal role of the phase of alpha-band (10 Hz) oscillations in visual perception.
