TY - JOUR
T1 - X-MyoNET: Biometric Identification using Deep Processing of Transient Surface Electromyography
JF - bioRxiv
DO - 10.1101/2021.11.30.470688
SP - 2021.11.30.470688
AU - Qin Hu
AU - Alireza Sarmadi
AU - Paras Gulati
AU - Prashanth Krishnamurthy
AU - Farshad Khorrami
AU - S. Farokh Atashzar
Y1 - 2021/01/01
UR - http://biorxiv.org/content/early/2021/12/02/2021.11.30.470688.abstract
N2 - The rapid development of the Internet and various applications such as the Internet of Medical Things (IoMT) has raised substantial concerns about personal information security. Conventional methods (e.g., passwords) and classic biological features (e.g., fingerprints) are security-deficient because of potential information leakage and hacking. Biometrics that express behavioral features offer a robust approach to achieving information security because of their uniqueness and complexity. In this paper, we consider identifying human subjects based on their transient neurophysiological signature, captured using multichannel upper-limb surface electromyography (sEMG). An explainable artificial intelligence (XAI) approach is proposed to process the internal dynamics of temporal sEMG signals. We propose and prove the suitability of "transient sEMG" as a biomarker that can identify individuals. For this, we utilize Gradient-weighted Class Activation Mapping (Grad-CAM) to explain the network's attention. The outcome not only decodes and visualizes the unique neurophysiological pattern (i.e., motor unit recruitment during the transient phase of contraction) associated with each individual but also generates an optimized two-dimensional (2D) spectrotemporal mask used to significantly reduce the size of the model and the number of trainable parameters. The resulting mask selectively and systematically samples the spectrotemporal characteristics of the users' neurophysiological responses, discarding 40% of the input space while maintaining an accuracy of about 74% with a much shallower neural network architecture. In a systematic comparative study, we find that our proposed model outperforms several state-of-the-art algorithms. For broader impact, we anticipate our design of a compact, practical, interpretable, and robust identification system that requires only a minimal number of gestures and sensors (only 7% of the entire data set) to be a starting point for small and portable identification hardware. Competing Interest Statement: The authors have declared no competing interest.
ER -