Abstract
How we process ongoing experiences is shaped by our personal history, current needs, and future goals. Consequently, brain regions involved in generating these subjective appraisals, such as the ventromedial prefrontal cortex (vmPFC), often appear heterogeneous across individuals even in response to the same external information. To elucidate the role of the vmPFC in processing ongoing experiences, we developed a computational framework and analysis pipeline to characterize the spatiotemporal dynamics of individual vmPFC responses as participants viewed a 45-minute television drama. Combining functional magnetic resonance imaging, facial expression tracking, and self-reported emotional experiences across four studies, our data suggest that the vmPFC slowly transitions through a series of discrete states that broadly map onto affective experiences. Although these transitions typically occur at idiosyncratic times across people, participants exhibited a marked increase in state alignment during highly affectively valenced events in the show. Our work suggests that the vmPFC ascribes affective meaning to our ongoing experiences.
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
1) Added Group-HMM analyses. We have added a new Group-HMM that is fit to all participants' data, treating each participant as a separate sequence. This model complements our original analysis: it reduces the noise associated with subject heterogeneity and simplifies subsequent analyses, because states are necessarily aligned across participants. We retain our original method to demonstrate the spatial similarity of the participant state patterns, but use the new Group-HMM for most of the other analyses.

2) Added cross-episode validation of the Group-HMM. We have also fit the Group-HMM to the Study 1 Episode 2 data and assessed the generalizability of the patterns across episodes. Despite the different narrative content, we see evidence that three of the four states generalize across both episodes.

3) Added an Inter-Experiment Latent Component Model. We have added an Inter-Experiment Shared Response Model, which allows us to identify patterns of facial expression and subjective feeling that project onto a latent component shared with the vmPFC state concordances from Studies 1 & 2. We believe this analysis provides a more direct mapping of how latent affective components are manifested in vmPFC state concordances, facial expressions, and subjective feelings.

4) Added additional feature mapping to HMM state concordance. We provide additional analyses mapping the Group-HMM state concordances onto visual and affective features, which we believe helps to interpret what types of information the states may be processing.

5) Added an additional ROI (PCC). We have added the posterior cingulate cortex (PCC) as an additional control region for all analyses reported in the paper.

6) Deeper dive into autocorrelation. We have slightly updated our pipeline for computing autocorrelation and have included additional supplementary analyses exploring voxel-level autocorrelations. We now find that both voxels and spatial patterns in the vmPFC exhibit longer autocorrelation than those in V1.
In addition, we find that some of this variance can be explained by susceptibility artifacts, by mapping voxel autocorrelations onto signal-to-noise ratio maps.

7) Univariate contrasts based on vmPFC state changes. To provide additional evidence that the vmPFC states process distinct types of information, we now include additional analyses in which we use participant-specific vmPFC state changes to identify differential voxel activations across the brain. These analyses indicate that three of the four vmPFC states are associated with unique patterns of whole-brain activity, and they highlight group-level endogenous processing that is temporally offset from the stimulus.
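The spatial-pattern autocorrelation comparison described in point 6 can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the authors' pipeline: the function name, array shapes, lag range, and the smoothed-versus-noise comparison (standing in for vmPFC versus V1) are all hypothetical.

```python
import numpy as np

def pattern_autocorrelation(data, max_lag=10):
    """Average Pearson correlation between the spatial pattern at each
    time point and the pattern `lag` TRs later (data: time x voxels).

    A minimal sketch; the published pipeline may differ.
    """
    n_trs, n_voxels = data.shape
    # z-score each time point's spatial pattern across voxels
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    return np.array([
        ((z[:-lag] * z[lag:]).sum(axis=1) / n_voxels).mean()
        for lag in range(1, max_lag + 1)
    ])

# Illustrative data: a temporally smooth signal (vmPFC-like) should show
# a slower autocorrelation decay than unsmoothed noise (V1-like control).
rng = np.random.default_rng(0)
noise = rng.standard_normal((300, 100))          # time x voxels
kernel = np.ones(8) / 8.0                        # 8-TR moving average
smooth = np.apply_along_axis(
    lambda v: np.convolve(v, kernel, mode="same"), 0, noise
)
```

A faster-decaying curve for the control region, as in the sketch above, is the signature the authors describe: pattern correlations in a slowly evolving region remain elevated across longer lags.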