Andrew (Drew) Jaegle

I'm a Research Scientist at DeepMind in London, where I work on artificial intelligence with an amazing group of collaborators.

Before DeepMind, I worked on computer vision and computational neuroscience at the University of Pennsylvania, where I did a PhD in the GRASP Lab with Kostas Daniilidis and a postdoc with Nicole Rust. I also spent time in the labs of Diego Contreras and Maria Geffen and with the folks in the Computational Neuroscience Initiative. During my PhD, I did research internships at the Max Planck Institute for Intelligent Systems, where I worked with Michael Black and Javier Romero, and at DeepMind, where I worked with Greg Wayne. Prior to that, I worked on consciousness and attention at the City University of New York with Tony Ro. I have a BA from Texas A&M University, where I studied philosophy, music composition, and mathematics.

Email  /  CV  /  Google Scholar  /  Twitter  /  LinkedIn

Research

My research is focused on topics in AI including vision and sensory reasoning, imitation, and unsupervised learning. I'm broadly interested in perception and behavior, and I also like to think about and work on philosophy and music.

Perceiver IO: A General Architecture for Structured Inputs & Outputs


Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matt Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira
arXiv, 2021
paper / pdf / code

A Perceiver architecture that handles arbitrary inputs and outputs: state-of-the-art on Sintel optical flow, matches a strong BERT baseline without tokenization, supports simultaneous multi-task GLUE fine-tuning, gets better and faster results on ImageNet without 2D assumptions, and performs strongly on StarCraft II and multimodal Kinetics autoencoding.

Which priors matter? Benchmarking models for learning latent dynamics


Alex Botev, Andrew Jaegle, Peter Wirnsberger, Daniel Hennes, Irina Higgins
Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks, 2021
paper

A new benchmark for models of physical dynamics from images with 17 datasets and strong baselines. Code and data coming soon!

Imitation by Predicting Observations


Andrew Jaegle, Yury Sulsky, Arun Ahuja, Jake Bruce, Rob Fergus, Greg Wayne
International Conference on Machine Learning (ICML), 2021
paper / pdf

Imitation from observations alone: a sequence-modeling framework for robust, non-adversarial inverse reinforcement learning.

Perceiver: General Perception with Iterative Attention


Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, João Carreira
International Conference on Machine Learning (ICML), 2021
paper / pdf

A scalable, domain-agnostic attentional architecture that works on images, audio, point clouds, video, multimodal data and more.
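
The core idea can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the real model uses learned query/key/value projections, multi-head attention, positional encodings, and layer normalization, and the names and dimensions below are mine.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def perceiver_sketch(inputs, latents, num_blocks=4):
    # Each block: cross-attend the small latent array to the large
    # input array, then self-attend among the latents. Per-block cost
    # is O(N*M + N^2) for M inputs and N latents, with N << M.
    for _ in range(num_blocks):
        latents = latents + attention(latents, inputs, inputs)
        latents = latents + attention(latents, latents, latents)
    return latents

# Example: 1024 input "pixels" of dim 64 funneled through 8 latents.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(1024, 64))
latents = rng.normal(size=(8, 64))
out = perceiver_sketch(inputs, latents)
print(out.shape)  # (8, 64)
```

Because the quadratic self-attention cost is paid over the latents rather than the inputs, the same machinery scales to images, audio, point clouds, and video without domain-specific assumptions.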

Temporal Difference and Return Optimism in Cooperative Multi-Agent Reinforcement Learning


Mark Rowland, Shayegan Omidshafiei, Daniel Hennes, Will Dabney, Andrew Jaegle, Paul Muller, Julien Pérolat, Karl Tuyls
Journal of Artificial Intelligence Research (JAIR), 2021
paper / pdf

How to use optimism to encourage agents to learn in the presence of other, suboptimal agents in multi-agent reinforcement learning.

Game Plan: What AI can do for Football, and What Football can do for AI


Karl Tuyls*, Shayegan Omidshafiei*, Paul Muller, Zhe Wang, Jerome Connor, Daniel Hennes, Ian Graham, William Spearman, Tim Waskett, Dafydd Steele, Pauline Luc, Adrià Recasens, Alexandre Galashov, Gregory Thornton, Romuald Elie, Pablo Sprechmann, Pol Moreno, Kris Cao, Marta Garnelo, Praneet Dutta, Michal Valko, Nicolas Heess, Alex Bridgland, Julien Pérolat, Bart De Vylder, Ali Eslami, Mark Rowland, Andrew Jaegle, Rémi Munos, Trevor Back, Razia Ahamed, Simon Bouton, Nathalie Beauguerlange, Jackson Broshear, Thore Graepel, Demis Hassabis
Journal of Artificial Intelligence Research (JAIR), 2021
paper / pdf

An overview of the challenges in football analytics and how tools from AI (including computer vision, game theory, and statistical learning) can give new and better insights.

Physically embedded planning problems: New challenges for reinforcement learning


Mehdi Mirza*, Andrew Jaegle*, Jonathan Hunt*, Arthur Guez*, Saran Tunyasuvunakool, Alistair Muldal, Théophane Weber, Peter Karkus, Sébastien Racanière, Lars Buesing, Timothy Lillicrap, Nicolas Heess
arXiv, 2020
paper / pdf / video / code

A suite of difficult MuJoCo domains that pair high-level, abstract structure with physical embodiment. These domains are very challenging for current black-box RL methods, but can be solved by building in high-level structure: we challenge you to bridge this gap!

Beyond Tabula-Rasa: a Modular Reinforcement Learning Approach for Physically Embedded 3D Sokoban


Peter Karkus, Mehdi Mirza, Arthur Guez, Andrew Jaegle, Timothy Lillicrap, Lars Buesing, Nicolas Heess, Théophane Weber
ICLR workshop: Beyond tabula rasa in reinforcement learning, 2020
paper / pdf

We revisit the classic sense-plan-act structure to build a modular reinforcement learning agent that can efficiently solve physical tasks with abstract structure.

Hamiltonian generative networks


Peter Toth*, Danilo Rezende*, Andrew Jaegle, Sébastien Racanière, Alex Botev, Irina Higgins
International Conference on Learning Representations (ICLR), 2020
paper / pdf

Methods for learning the structure of physical systems from video without supervision using generative models built with ideas from Hamiltonian mechanics.
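
A toy sketch of the Hamiltonian rollout at the heart of this line of work. Here a hand-written harmonic-oscillator energy stands in for the learned Hamiltonian network, and the function names and step sizes are illustrative, not the paper's.

```python
import numpy as np

def hamiltonian(q, p):
    # Harmonic oscillator: H = p^2/2 + q^2/2. In the paper's setting,
    # H is a learned network; this closed form just illustrates rollout.
    return 0.5 * p**2 + 0.5 * q**2

def grad_H(q, p):
    # Analytic gradients (dH/dq, dH/dp); a learned H would use autodiff.
    return q, p

def leapfrog(q, p, dt, steps):
    # Symplectic integrator rolling out Hamilton's equations
    # dq/dt = dH/dp, dp/dt = -dH/dq, approximately conserving H.
    traj = []
    for _ in range(steps):
        dHdq, _ = grad_H(q, p)
        p = p - 0.5 * dt * dHdq          # half kick
        _, dHdp = grad_H(q, p)
        q = q + dt * dHdp                # full drift
        dHdq, _ = grad_H(q, p)
        p = p - 0.5 * dt * dHdq          # half kick
        traj.append((q, p))
    return traj

q0, p0 = 1.0, 0.0
traj = leapfrog(q0, p0, dt=0.1, steps=100)
qT, pT = traj[-1]
# Energy drift stays small under the symplectic scheme.
print(abs(hamiltonian(qT, pT) - hamiltonian(q0, p0)) < 1e-3)  # True
```

The symplectic integrator is the important design choice: it keeps the (learned) energy approximately conserved over long rollouts, which is what makes the long-term predictions stable.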

Keyframing the Future: Keyframe Discovery for Visual Prediction and Planning


Karl Pertsch*, Oleh Rybkin*, Jingyung Yang, Shenghao Zhou, Kosta Derpanis, Kostas Daniilidis, Joseph Lim, Andrew Jaegle
Learning for Dynamics and Control (L4DC), 2020
paper / project page / pdf

A model that first predicts important moments in the future and then uses these to infer the missing details, without supervision.

Visual novelty, curiosity, and intrinsic reward in machine learning and the brain


Andrew Jaegle, Vahid Mehrpour, Nicole Rust
Current Opinion in Neurobiology, 2019
paper / pdf

A review connecting two literatures with important, but largely unappreciated, links: intrinsic motivation in AI and visual novelty in neuroscience and psychology.

Population response magnitude variation in inferotemporal cortex predicts image memorability


Andrew Jaegle*, Vahid Mehrpour*, Yalda Mohsenzadeh*, Travis Meyer, Aude Oliva, Nicole Rust
eLife, 2019
paper / pdf

A simple signature of an image’s memorability – how easy it is for people to remember, compared to other images – that is shared by the brain and convolutional neural networks.

Codes, functions, and causes: A critique of Brette's conceptual analysis of coding


David Barack and Andrew Jaegle
Behavioral and Brain Sciences, 2019
paper / pdf

An argument for the validity of functional decomposition and approximation in causal analyses of complex systems like brains and deep networks (also tents). A response to Brette 2018 (and see his response to our response).

Learning what you can do before doing anything


Oleh Rybkin*, Karl Pertsch*, Kosta Derpanis, Kostas Daniilidis, Andrew Jaegle
International Conference on Learning Representations (ICLR), 2019
paper / project page / pdf / video

An unsupervised method that captures the latent structure of behavior from video using compositional video prediction models.

Predicting the Future with Transformational States


Andrew Jaegle, Oleh Rybkin, Kosta Derpanis, Kostas Daniilidis
arXiv, 2018
paper / pdf

Methods for video prediction by reasoning about changes to static content in a learned latent space. Check out Oleh’s blog post discussing some of the problems with using standard reconstruction losses.

Understanding image motion with group representations


Andrew Jaegle*, Stephen Phillips*, Daphne Ippolito, Kostas Daniilidis
International Conference on Learning Representations (ICLR), 2018
paper / pdf

A simple method for unsupervised learning of motion in videos using ideas from group theory.
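
The group-theoretic constraint can be illustrated with a toy example. Planar rotations stand in for the learned motion representation here; the paper learns such representations from video rather than writing them down.

```python
import numpy as np

def rot(theta):
    # 2D rotation matrix: a concrete group representation of planar motion.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# The property a group-structured motion representation should satisfy:
# representing two motions applied in sequence equals composing
# (here, matrix-multiplying) their individual representations.
a, b = 0.3, 0.5
composed = rot(a) @ rot(b)
direct = rot(a + b)
print(np.allclose(composed, direct))  # True
```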

Fast, robust, continuous monocular egomotion computation


Andrew Jaegle*, Stephen Phillips*, Kostas Daniilidis
International Conference on Robotics and Automation (ICRA), 2016
paper / pdf / code

Robust methods for estimating camera motion in real-world video by reasoning about the contribution of each element of an optic flow field to the global motion estimate.

Emergence of invariant representation of vocalizations in the auditory cortex


Isaac Carruthers, Diego Laplagne, Andrew Jaegle, John Briguglio, Laetitia Mwilambwe-Tshilobo, Ryan Natan, Maria Geffen
Journal of Neurophysiology, 2015
paper / pdf

How nuisance transformations – changes to a sound that don’t change its meaning, like saying the same word faster or slower – are tuned out as signals move from earlier to later stages of auditory cortex.

Second order receptive field properties of simple and complex cells support a new standard model


Madineh Sedigh-Sarvestani, Iván Fernández-Lamo, Andrew Jaegle, Morgan Taylor
Journal of Neuroscience, 2014
paper / pdf

A critical review of Fournier et al.’s work characterizing the structure of synaptic receptive fields in visual cortex.

Direct control of visual perception with phase-specific modulation of posterior parietal cortex


Andrew Jaegle, Tony Ro
Journal of Cognitive Neuroscience, 2014
paper / pdf

Evidence for a causal role of the phase of alpha-band (10 Hz) oscillations in visual perception.

