PT - JOURNAL ARTICLE
AU - Pablo Ortega
AU - Cédric Colas
AU - Aldo Faisal
TI - Convolutional neural network, personalised, closed-loop Brain-Computer Interfaces for multi-way control mode switching in real-time
AID - 10.1101/256701
DP - 2018 Jan 01
TA - bioRxiv
PG - 256701
4099 - http://biorxiv.org/content/early/2018/03/06/256701.short
4100 - http://biorxiv.org/content/early/2018/03/06/256701.full
AB - Exoskeletons and robotic devices are, for many motor-disabled people, the only way to interact with their environment. Our lab previously developed a gaze-guided assistive robotic system for grasping. It is well known that the same natural task can require different interactions described by different dynamical systems, which in turn require different robotic controllers and their selection by the user in a self-paced way. Therefore, we investigated different ways to achieve transitions between multiple states, finding that eye blinks were the most reliable for transitioning from 'off' to 'control' modes (binary classification) compared to voice and electromyography. In this paper we expand on this work by investigating brain signals as sources for control mode switching. We developed a Brain-Computer Interface (BCI) that allows users to switch between four control modes in a self-paced way in real time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (ConvNets), known for their capability to find the optimal features for a classification task, which we hypothesised would add flexibility to the system in terms of which mental activities the user could perform to control it. We tested our system using the Cybathlon BrainRunners computer game, which presents all the challenges inherent to real-time control.
Our preliminary results show that an efficient architecture (SmallNet), composed of a convolutional layer, a fully connected layer and a sigmoid classification layer, is able to classify 4 mental activities that the user chose to perform. For the user's preferred mental activities, we ran and validated the system online and retrained it using EEG data collected online. We achieved 47.6% accuracy in online operation on the 4-way classification task. In particular, we found that models trained with online-collected data better predicted the behaviour of the system in real time, suggesting, as a side note, that similar (ConvNet-based) offline classification methods reported in the literature might suffer a decay in performance when applied online. To the best of our knowledge, this is the first time such an architecture has been tested in an online operation task. Compared to our previous blink-based method, accuracy decreased by a factor of 1.6 (to less than half the error-free rate of blinks), but the number of states among which we can transition doubled from two to four, bringing the opportunity for finer, self-paced control of the specific subtasks composing natural grasping.
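The abstract describes SmallNet only at the level of layer types (one convolutional layer, one fully connected layer, a sigmoid classification layer over 4 mental activities). The forward pass of such an architecture can be sketched as below; all dimensions (EEG channel count, window length, filter count, kernel width) are hypothetical placeholders, not values from the paper, and random weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not given in the abstract):
# 16 EEG channels, 250-sample window, 8 temporal filters of width 25.
n_channels, n_samples = 16, 250
n_filters, kernel_w = 8, 25
n_classes = 4  # four mental activities / control modes
out_w = n_samples - kernel_w + 1  # valid-convolution output width

# Random stand-ins for trained parameters.
conv_w = rng.standard_normal((n_filters, n_channels, kernel_w)) * 0.01
fc_w = rng.standard_normal((n_filters * out_w, n_classes)) * 0.01
fc_b = np.zeros(n_classes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smallnet_forward(eeg):
    """One conv layer over the EEG window, one fully connected layer,
    then sigmoid units giving a score per control mode."""
    feat = np.empty((n_filters, out_w))
    for f in range(n_filters):
        for t in range(out_w):
            # Temporal filter f applied across all channels at offset t.
            feat[f, t] = np.sum(conv_w[f] * eeg[:, t:t + kernel_w])
    hidden = np.maximum(feat, 0.0).ravel()  # ReLU assumed for illustration
    return sigmoid(hidden @ fc_w + fc_b)

# One simulated EEG window; the mode with the highest score wins.
scores = smallnet_forward(rng.standard_normal((n_channels, n_samples)))
predicted_mode = int(np.argmax(scores))
```

In a self-paced, real-time setting the same forward pass would run on a sliding window of the EEG stream, with the argmax over the four sigmoid outputs selecting the control mode to switch to.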