Abstract
Robotic exoskeletons require human control and decision-making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., high-level controllers), we designed an environment recognition system using computer vision and deep learning. We collected over 5.6 million images of indoor and outdoor real-world walking environments using a wearable camera system, of which ~923,000 images were annotated using a 12-class hierarchical labelling architecture (called the ExoNet database). We then trained and tested the EfficientNetB0 convolutional neural network, which was designed for efficiency using neural architecture search, to classify the different walking environments. Our environment recognition system achieved ~73% image classification accuracy. While these preliminary results benchmark EfficientNetB0 on the ExoNet database, further research is needed to compare different image classification algorithms in order to develop an accurate, real-time, environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
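To make the training setup concrete, the following is a minimal sketch (not the authors' code) of fine-tuning an ImageNet-pretrained EfficientNetB0 with a 12-class output head using TensorFlow/Keras; the input resolution, optimizer, and the `train_ds`/`val_ds` dataset pipelines are assumptions for illustration.

```python
# Hypothetical sketch: fine-tuning EfficientNetB0 for 12-class
# walking-environment classification (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0

NUM_CLASSES = 12  # ExoNet's 12-class hierarchical labelling architecture

# ImageNet-pretrained backbone without its original classification head.
base = EfficientNetB0(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3), pooling="avg")

# New softmax head for the walking-environment classes.
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds are assumed tf.data.Dataset pipelines yielding
# (image, integer_label) pairs built from the annotated ExoNet images:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Reusing an ImageNet-pretrained backbone and replacing only the classification head is the standard transfer-learning recipe for a dataset of this kind; whether the backbone weights are frozen or fine-tuned end-to-end is a design choice not specified here.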
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
* Research supported by the Natural Sciences and Engineering Research Council of Canada (NSERC); the Waterloo Engineering Excellence PhD Fellowship; John McPhee’s Tier I Canada Research Chair in Biomechatronic System Dynamics; and Alexander Wong’s Tier II Canada Research Chair in Artificial Intelligence and Medical Imaging.
(email: blaschow@uwaterloo.ca).
(email: wmcnally@uwaterloo.ca).
(email: alexander.wong@uwaterloo.ca).
(email: mcphee@uwaterloo.ca).