Abstract
Extracting the relationship between high-dimensional recordings of neural activity and complex behavior is a ubiquitous problem in systems neuroscience. Toward this goal, encoding and decoding models attempt to infer the conditional distribution of neural activity given behavior and vice versa, while dimensionality reduction techniques aim to extract interpretable low-dimensional representations. Variational autoencoders (VAEs) are flexible deep-learning models commonly used to infer low-dimensional embeddings of neural or behavioral data. However, it is challenging for VAEs to accurately model arbitrary conditional distributions, such as those encountered in neural encoding and decoding, and even more so to model both simultaneously. Here, we present a VAE-based approach for accurately calculating such conditional distributions. We validate our approach on a task with known ground truth and demonstrate its applicability to high-dimensional behavioral time series by retrieving the conditional distributions over masked body parts of walking flies. Finally, we probabilistically decode motor trajectories from neural population activity in a monkey reach task and query the same VAE for the encoding distribution of neural activity given behavior. Our approach provides a unifying perspective on joint dimensionality reduction and learning conditional distributions of neural and behavioral data, which will allow for scaling common analyses in neuroscience to today's high-dimensional multi-modal datasets.
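The abstract does not describe the implementation itself; purely as an illustration of the general idea of querying one joint VAE for conditional distributions in either direction, the following is a minimal sketch, not the authors' method. It assumes a PyTorch-style model with per-modality encoders and decoders, and the names and dimensions (JointVAE, N_NEURAL, N_BEHAVIOR, N_LATENT) are hypothetical.

```python
# Illustrative sketch only (not the paper's implementation): a joint VAE over
# two modalities (neural activity and behavior) that can be queried for the
# conditional distribution of one modality given the other by encoding the
# observed modality and decoding the missing one.
import torch
import torch.nn as nn

N_NEURAL, N_BEHAVIOR, N_LATENT = 100, 20, 8  # assumed dimensions

class JointVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # one encoder per modality, each outputting Gaussian parameters over z
        self.enc_neural = nn.Linear(N_NEURAL, 2 * N_LATENT)
        self.enc_behavior = nn.Linear(N_BEHAVIOR, 2 * N_LATENT)
        # one decoder per modality, each outputting Gaussian parameters
        self.dec_neural = nn.Linear(N_LATENT, 2 * N_NEURAL)
        self.dec_behavior = nn.Linear(N_LATENT, 2 * N_BEHAVIOR)

    def encode(self, x, encoder):
        mu, logvar = encoder(x).chunk(2, dim=-1)
        return mu, logvar

    def decode_behavior_given_neural(self, x_neural, n_samples=100):
        """Monte-Carlo approximation of p(behavior | neural): encode the neural
        observation, sample latents, decode each sample into behavior."""
        mu, logvar = self.encode(x_neural, self.enc_neural)
        std = (0.5 * logvar).exp()
        z = mu + std * torch.randn(n_samples, *mu.shape)       # (S, B, latent)
        b_mu, b_logvar = self.dec_behavior(z).chunk(2, dim=-1)
        return b_mu, b_logvar                                   # per-sample Gaussians

# usage: query the decoding distribution for a batch of neural observations;
# the encoding direction would swap the roles of the two modalities
model = JointVAE()
x_neural = torch.randn(4, N_NEURAL)
b_mu, b_logvar = model.decode_behavior_given_neural(x_neural)
print(b_mu.shape)  # torch.Size([100, 4, 20]): a mixture over latent samples
```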
Competing Interest Statement
The authors have declared no competing interest.