Abstract
We explore a method for reconstructing visual stimuli from brain activity. Using large databases of natural images, we trained a deep convolutional generative adversarial network capable of generating grayscale photos similar to the stimuli presented during two functional magnetic resonance imaging experiments. We then learned a linear model that predicts the generative model's latent space from measured brain activity; passing the predicted latent vector through the previously trained generator yields an image similar to the presented stimulus. Using this approach we were able to reconstruct structural and some semantic features for a proportion of the natural image sets. In a behavioral test, subjects identified the reconstruction of the original stimulus in a pairwise comparison in 67.2% and 66.4% of cases for the two natural image datasets, respectively. Our approach does not require end-to-end training of a large generative model on limited neuroimaging data, and rapid advances in generative modeling promise further improvements in reconstruction performance.
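The core of the decoding step described above is a linear mapping from voxel responses to the generator's latent space. A minimal sketch of that step, assuming ridge regression and hypothetical data shapes (the abstract does not specify the regression variant, regularization, or dimensionalities):

```python
import numpy as np

# Hypothetical shapes: voxel responses X (n_trials x n_voxels) and the
# DCGAN latent vectors Z (n_trials x latent_dim) of the training stimuli.
rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 200, 1000, 100
X = rng.standard_normal((n_trials, n_voxels))
Z = rng.standard_normal((n_trials, latent_dim))

# Ridge regression in closed form: W = (X^T X + lam*I)^{-1} X^T Z
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Z)

# Predict the latent vector for a new brain response; feeding z_hat
# through the pretrained generator would produce the reconstruction.
x_new = rng.standard_normal(n_voxels)
z_hat = x_new @ W
print(z_hat.shape)
```

The choice of a linear decoder keeps the number of fitted parameters small relative to the limited amount of neuroimaging data, which is the motivation stated in the abstract for not training the generative model end-to-end.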