Abstract
Here we present a more interpretable and flexible approach for reconstructing the contents of perception, attention, and memory from neuroimaging data. Our enhanced inverted encoding model (eIEM) incorporates methodological improvements, including proper accounting of population-level tuning functions and a trial-by-trial, prediction-error-based metric in which reconstruction quality is measured in meaningful units. Flexibility is further improved by eIEM's novel goodness-of-fit feature: for trial-by-trial reconstructions, goodness-of-fit is obtained independently of (non-circularly from) prediction error and can be applied to any IEM procedure or decoding metric, yielding improved reconstruction quality and brain-behavior correlations. We validate eIEM using methodological principles, simulated neuroimaging datasets, and three pre-existing fMRI datasets spanning perception, attention, and working memory. Notably, eIEM is easy to apply and broadly accessible: our Python package (https://pypi.org/project/inverted-encoding) implements eIEM in one line of code and is easily modifiable to compare performance metrics and/or scale up to more complex models.
Competing Interest Statement
The authors have declared no competing interests.
Footnotes
Updates pending submission to a peer-reviewed journal.