Andrew Kyle Lampinen

Research Scientist, DeepMind
Verified email at google.com
Cited by 2189

Can language models learn from explanations in context?

AK Lampinen, I Dasgupta, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Language Models (LMs) can perform new tasks by adapting to a few in-context examples.
For humans, explanations that connect examples to task principles can improve learning. We …

Language models show human-like content effects on reasoning

I Dasgupta, AK Lampinen, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Abstract reasoning is a key ability for an intelligent system. Large language models (LMs)
achieve above-chance performance on abstract reasoning tasks, but exhibit many …

An analytic theory of generalization dynamics and transfer learning in deep linear networks

AK Lampinen, S Ganguli - arXiv preprint arXiv:1809.10374, 2018 - arxiv.org
Much attention has been devoted recently to the generalization puzzle in deep learning:
large, deep networks can generalize well, but existing theories bounding generalization error …

Integration of new information in memory: new insights from a complementary learning systems perspective

…, BL McNaughton, AK Lampinen - … Transactions of the …, 2020 - royalsocietypublishing.org
According to complementary learning systems theory, integrating new memories into the
neocortex of the brain without interfering with what is already known depends on a gradual …

Transformers generalize differently from information stored in context vs in weights

…, I Dasgupta, J Kim, D Kumaran, AK Lampinen… - arXiv preprint arXiv …, 2022 - arxiv.org
Transformer models can use two fundamentally different kinds of information: information
stored in weights during training, and information provided "in-context" at inference time. In …

Tell me why! Explanations support learning relational and causal structure

AK Lampinen, N Roy, I Dasgupta… - International …, 2022 - proceedings.mlr.press
Inferring the abstract relational and causal structure of the world is a major challenge for
reinforcement-learning (RL) agents. For humans, language, particularly in the form of …

Automated curricula through setter-solver interactions

S Racaniere, AK Lampinen, A Santoro… - arXiv preprint arXiv …, 2019 - arxiv.org
Reinforcement learning algorithms use correlations between policies and rewards to improve
agent performance. But in dynamic or sparsely rewarding environments these correlations …

Getting aligned on representational alignment

…, TP O'Connell, T Unterthiner, AK Lampinen… - arXiv preprint arXiv …, 2023 - arxiv.org
Biological and artificial information processing systems form representations of the world that
they can use to categorize, reason, plan, navigate, and make decisions. To what extent do …

Transforming task representations to perform novel tasks

AK Lampinen, JL McClelland - Proceedings of the National …, 2020 - National Acad Sciences
An important aspect of intelligence is the ability to adapt to a novel task without any direct
experience (zero shot), based on its relationship to previous tasks. Humans can exhibit this …

SODA: Bottleneck Diffusion Models for Representation Learning

…, D Zoran, M Malinowski, AK Lampinen… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce SODA, a self-supervised diffusion model, designed for representation learning.
The model incorporates an image encoder, which distills a source view into a compact …