Andrew Kyle Lampinen
Research Scientist, DeepMind. Verified email at google.com. Cited by 2189.
Can language models learn from explanations in context?
Language Models (LMs) can perform new tasks by adapting to a few in-context examples.
For humans, explanations that connect examples to task principles can improve learning. We …
Language models show human-like content effects on reasoning
Abstract reasoning is a key ability for an intelligent system. Large language models (LMs)
achieve above-chance performance on abstract reasoning tasks, but exhibit many …
An analytic theory of generalization dynamics and transfer learning in deep linear networks
AK Lampinen, S Ganguli - arXiv preprint arXiv:1809.10374, 2018 - arxiv.org
Much attention has been devoted recently to the generalization puzzle in deep learning:
large, deep networks can generalize well, but existing theories bounding generalization error …
Integration of new information in memory: new insights from a complementary learning systems perspective
…, BL McNaughton, AK Lampinen - … Transactions of the …, 2020 - royalsocietypublishing.org
According to complementary learning systems theory, integrating new memories into the
neocortex of the brain without interfering with what is already known depends on a gradual …
Transformers generalize differently from information stored in context vs in weights
Transformer models can use two fundamentally different kinds of information: information
stored in weights during training, and information provided "in-context" at inference time. In …
Tell me why! Explanations support learning relational and causal structure
Inferring the abstract relational and causal structure of the world is a major challenge for
reinforcement-learning (RL) agents. For humans, language, particularly in the form of …
Automated curricula through setter-solver interactions
Reinforcement learning algorithms use correlations between policies and rewards to improve
agent performance. But in dynamic or sparsely rewarding environments these correlations …
Getting aligned on representational alignment
Biological and artificial information processing systems form representations of the world that
they can use to categorize, reason, plan, navigate, and make decisions. To what extent do …
Transforming task representations to perform novel tasks
AK Lampinen, JL McClelland - Proceedings of the National …, 2020 - National Acad Sciences
An important aspect of intelligence is the ability to adapt to a novel task without any direct
experience (zero shot), based on its relationship to previous tasks. Humans can exhibit this …
SODA: Bottleneck Diffusion Models for Representation Learning
We introduce SODA, a self-supervised diffusion model, designed for representation learning.
The model incorporates an image encoder, which distills a source view into a compact …