PT - JOURNAL ARTICLE
AU - Venkatesh Elango
AU - Aashish N Patel
AU - Kai J Miller
AU - Vikash Gilja
TI - Sequence Transfer Learning for Neural Decoding
AID - 10.1101/210732
DP - 2017 Jan 01
TA - bioRxiv
PG - 210732
4099 - http://biorxiv.org/content/early/2017/12/23/210732.short
4100 - http://biorxiv.org/content/early/2017/12/23/210732.full
AB - A fundamental challenge in designing brain-computer interfaces (BCIs) is decoding behavior from time-varying neural oscillations. In typical applications, decoders are constructed for individual subjects and with limited data, leading to restrictions on the types of models that can be utilized. Currently, the best-performing decoders are typically linear models that operate under rigid timing constraints and can be trained with limited data. Here we demonstrate the use of Long Short-Term Memory (LSTM) networks to take advantage of the temporal information present in sequential neural data collected from subjects implanted with electrocorticographic (ECoG) electrode arrays while performing a finger flexion task. Our models achieve accuracies comparable to existing techniques while remaining robust to variation in training data size. Moreover, we combine the LSTM networks with an affine transformation layer to construct a novel architecture for transfer learning. We demonstrate that in scenarios where only the affine transform is learned for a new subject, it is possible to achieve results comparable to existing state-of-the-art techniques, with the notable advantage of increased model stability during training on novel subjects. Relaxing the constraint of training only the affine transformation, our model exceeds the performance of current models across all training data sizes. Overall, this work demonstrates that LSTMs are a versatile model class that can accurately capture temporal patterns in neural data and provide a foundation for transfer learning in neural decoding.
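
The abstract describes a transfer-learning architecture in which a shared LSTM decoder is paired with a per-subject affine input transformation, and adaptation to a new subject can be restricted to training only that affine layer. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: all class names, layer sizes, electrode counts, and the classification readout are illustrative assumptions.

import torch
import torch.nn as nn

class SubjectAffine(nn.Module):
    """Per-subject affine transform mapping a subject's electrode
    features into a shared feature space (hypothetical module)."""
    def __init__(self, n_electrodes, n_shared):
        super().__init__()
        self.linear = nn.Linear(n_electrodes, n_shared)

    def forward(self, x):  # x: (batch, time, n_electrodes)
        return self.linear(x)

class LSTMDecoder(nn.Module):
    """Shared LSTM trunk decoding behavior (e.g., which finger
    flexed) from the affine-transformed feature sequence."""
    def __init__(self, n_shared, n_hidden, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(n_shared, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_classes)

    def forward(self, x):  # x: (batch, time, n_shared)
        out, _ = self.lstm(x)
        return self.readout(out[:, -1, :])  # decode from final step

# Transfer to a new subject: freeze the shared LSTM trunk and fit
# only that subject's affine layer, mirroring the limited-adaptation
# scenario in the abstract. Sizes below are placeholders.
shared = LSTMDecoder(n_shared=64, n_hidden=128, n_classes=5)
for p in shared.parameters():
    p.requires_grad = False

new_subject = SubjectAffine(n_electrodes=48, n_shared=64)
optimizer = torch.optim.Adam(new_subject.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 100, 48)    # dummy (batch, time, electrodes) ECoG features
y = torch.randint(0, 5, (8,))  # dummy finger-flexion class labels
loss = criterion(shared(new_subject(x)), y)
loss.backward()
optimizer.step()

Relaxing the constraint, as the abstract's final scenario describes, would simply mean leaving the shared trunk's parameters trainable and passing them to the optimizer as well.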