Abstract
Theories of the neural control of movement are largely based on movement-sensing devices that capture the dynamics of predefined anatomical landmarks. However, neuromuscular interfaces such as surface electromyography (sEMG) can potentially overcome the limitations of these technologies by directly sensing the motor commands transmitted to the muscles. This allows continuous, real-time prediction of kinematics and kinetics without the biological and physical constraints that limit motion-based technologies. In this work, we present a deep learning method that decodes and maps the electrophysiological activity of the forearm muscles into movements of the human hand. We recorded the kinematics and kinetics of the hand in healthy participants during a wide range of grasping and individual-digit movements, covering more than 20 degrees of freedom of the hand at slow (0.5 Hz) and fast (1.5 Hz) movement speeds. The model's input consists of the signals from 300 EMG sensors placed only over the extrinsic hand muscles. We demonstrate that our neural network accurately predicts the kinematics and contact forces of the hand, even for unseen movements and at simulated real-time resolution. By examining the latent space of the network, we find evidence that it has learned the underlying anatomical and neural features of the sEMG that drive all hand motor behaviours.
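To make the decoding setup concrete, the sketch below shows one minimal way such a mapping could be structured: a temporal encoder over windows of 300-channel sEMG producing a latent vector, with separate regression heads for joint kinematics and contact forces. This is not the authors' architecture; the layer sizes, window length, output dimensions, and names (EMGToHandDecoder, latent_dim, etc.) are illustrative assumptions only.

```python
# Minimal sketch, assuming a 1-D convolutional encoder over windowed 300-channel sEMG.
# All dimensions and names are illustrative; the paper's Methods define the actual model.
import torch
import torch.nn as nn

class EMGToHandDecoder(nn.Module):
    def __init__(self, n_channels: int = 300, n_joints: int = 22, n_forces: int = 5,
                 latent_dim: int = 64):
        super().__init__()
        # Temporal encoder: convolutions over the EMG window collapse time into a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Separate linear heads regress kinematics (joint angles) and kinetics (contact forces).
        self.kinematics_head = nn.Linear(latent_dim, n_joints)
        self.kinetics_head = nn.Linear(latent_dim, n_forces)

    def forward(self, emg_window: torch.Tensor):
        # emg_window: (batch, channels, time samples), e.g. a short sliding window of sEMG.
        z = self.encoder(emg_window)  # latent features of the kind analysed in the paper
        return self.kinematics_head(z), self.kinetics_head(z), z

# Usage example with random data standing in for one batch of sEMG windows.
model = EMGToHandDecoder()
angles, forces, latent = model(torch.randn(8, 300, 192))
print(angles.shape, forces.shape, latent.shape)  # (8, 22) (8, 5) (8, 64)
```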
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
New analyses, including the latent components learned by the AI and the differences across tasks and movement speeds.