Abstract
We introduce a robust video-based method for estimating the positions of fNIRS optodes on the scalp. The method is fast, requires no special hardware, and is intuitive to use with developmental populations. Co-registration is a crucial step for reliable analysis of fNIRS data, yet it remains an open problem for these populations. Existing methods impose motion constraints, require expert annotation, or are applicable only in laboratory conditions. Using novel computer-vision technologies, we implement a fully automatic appearance-based method that estimates the registration parameters of a mounted cap to the scalp from a raw video of the subject. We validate our method on 10 adult subjects and demonstrate its usability with infants. We compare our method to the standard 3D digitizer and to other photogrammetry-based approaches, and show that it achieves accuracy comparable to current appearance-based methods while being orders of magnitude faster. Our fast registration facilitates more spatially precise fNIRS analysis with developmental populations, even in unconventional environments. The method is implemented as an open-source toolbox at https://github.com/yoterel/STORM-Net.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
jaffedax{at}gmail.com
yaaray{at}tauex.tau.ac.il
amberman{at}tauex.tau.ac.il