TY  - JOUR
T1  - A large-scale neural network training framework for generalized estimation of single-trial population dynamics
JF  - bioRxiv
DO  - 10.1101/2021.01.13.426570
SP  - 2021.01.13.426570
AU  - Mohammad Reza Keshtkaran
AU  - Andrew R. Sedler
AU  - Raeed H. Chowdhury
AU  - Raghav Tandon
AU  - Diya Basrai
AU  - Sarah L. Nguyen
AU  - Hansem Sohn
AU  - Mehrdad Jazayeri
AU  - Lee E. Miller
AU  - Chethan Pandarinath
Y1  - 2021/01/01
UR  - http://biorxiv.org/content/early/2021/01/15/2021.01.13.426570.abstract
N2  - Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics. However, the sheer volume of data and its dynamical complexity are critical barriers to uncovering and interpreting these dynamics. Deep learning methods are a promising approach due to their ability to uncover meaningful relationships from large, complex, and noisy datasets. When applied to high-D spiking data from motor cortex (M1) during stereotyped behaviors, they offer improvements in the ability to uncover dynamics and their relation to subjects’ behaviors on a millisecond timescale. However, applying such methods to less-structured behaviors, or in brain areas that are not well-modeled by autonomous dynamics, is far more challenging, because deep learning methods often require careful hand-tuning of complex model hyperparameters (HPs). Here we demonstrate AutoLFADS, a large-scale, automated model-tuning framework that can characterize dynamics in diverse brain areas without regard to behavior. AutoLFADS uses distributed computing to train dozens of models simultaneously while using evolutionary algorithms to tune HPs in a completely unsupervised way. This enables accurate inference of dynamics out-of-the-box on a variety of datasets, including data from M1 during stereotyped and free-paced reaching, somatosensory cortex during reaching with perturbations, and frontal cortex during cognitive timing tasks. We present a cloud software package and comprehensive tutorials that enable new users to apply the method without needing dedicated computing resources. Competing Interest Statement: The authors have declared no competing interest.
ER  - 