Title: Love and friendship at first sight: Rapid neural representations of personal relevance

Faces are a primary source of social information, but little is known about the sequence of neural processing of personally relevant faces, such as those of our loved ones. We applied representational similarity analyses to simultaneous EEG-fMRI measurements of neural responses to faces of personal relevance to participants (their romantic partner and a close friend) compared to a stranger. Faces expressed fear, happiness or no emotion. Shared EEG-fMRI representations started 100 ms after stimulus onset, not only in visual cortex, but also in regions involved in social cognition, value representation and autobiographical memory, including ventromedial prefrontal cortex, temporoparietal junction and posterior cingulate cortex. According to established models of face recognition, these activations precede the stage of structural face encoding at around 170 ms after stimulus onset. Representations in fusiform gyrus, amygdala, insular cortex and nucleus accumbens were evident after 200 ms. Representations related to romantic love emerged after 400 ms in subcortical brain regions associated with reward. Our results point to the prioritized processing of personal relevance, with extensive cortical representation as soon as 100 ms after stimulus onset, preceding the stage of structural face encoding.

engage the amygdala without conscious processing. Behavioural evidence suggests that an even broader range of relevance-related impressions, such as threat, likeability, attractiveness and trustworthiness, is consistently formed on the basis of very brief (40-100 ms) exposure to face stimuli (11,12).
However, studies examining rapid responses to face stimuli have typically examined responses to unfamiliar faces, which, it can be argued, are of no personal relevance to the participants in an experiment. How quickly personal relevance can be extracted from faces, and where in visual or higher-level processing streams this occurs, is largely unknown. fMRI studies on romantic love have compared responses to faces of loved ones with responses to friends or other acquaintances, finding involvement of reward-related subcortical areas such as the ventral tegmental area, caudate and putamen, but also of the insula and anterior cingulate cortex, as well as of the occipital and fusiform gyri, superior temporal gyrus and dorsolateral middle frontal gyrus (for review, see 13).
Evidence for amygdala involvement is heterogeneous, with some studies reporting activations and others deactivations in response to a loved one's face (14,15). These studies have not provided evidence on how quickly or in what sequence these neural effects occur. EEG findings suggest that the earliest effects of the personal relevance of faces might occur at the stage of structural face encoding, around 170 ms after stimulus onset (for review, see 16), but effects have been more robustly reported at higher-order processing stages, modulating the amplitude of the P3 component (17). Findings from the language domain, however, have shown even earlier effects of personally relevant contexts during sensory processing, around 100 ms after word onset (18), raising the possibility that information on the personal relevance of faces might also be extracted at an early stage.
While the personal relevance of faces has not been the primary focus of previous research, face identity and face familiarity have been widely investigated. Studies using machine learning have shown that EEG and MEG signals within 100 ms after stimulus onset contain sufficient information for decoding of face identity based on physical features alone (19,20). Whether face familiarity can be decoded this quickly, based on stimulus properties alone, is unclear. Familiar faces are recognized in a highly robust and effortless way, whereas recognition of unfamiliar faces is more error-prone and relies heavily on image matching (16,21). Neuroimaging research suggests that this advantage involves a core face processing network comprising the occipital face area (OFA), fusiform face area (FFA), posterior superior temporal sulcus (pSTS), anterior STS, and inferior frontal gyrus, as well as an extended face processing network, which includes brain regions involved in episodic memory (precuneus, anterior temporal cortex), person knowledge (temporoparietal junction (TPJ), medial prefrontal cortex) and emotion processing (amygdala, insula) (22,23). The extended network shows stronger activation for familiar compared to unfamiliar faces (23-25), which might be the neural basis for behavioural findings of more robust face detection and recognition for familiar faces (16).
While face familiarity in everyday life overlaps considerably with personal relevance (most familiar faces we see are family, friends and colleagues), familiarity does not imply personal relevance. For example, while famous faces can be highly familiar, they are rarely highly relevant for the individual. So far, few studies have contrasted familiarity and personal relevance, but existing evidence suggests that brain activation and behavioural performance are more influenced by the personal relevance for the observer than familiarity per se, with strongest effects for relevant others and loved ones, followed by friends and other personally familiar people, and with weakest effects for famous faces (16,26). It is unknown at what stage during face processing the brain can detect personal relevance, and specifically whether determining the personal relevance of a face requires structural face encoding, as assumed by models of face identity processing.
In the present study, we aimed to investigate not only the brain network underlying processing of the emotional and personal relevance of faces, but also the temporal unfolding of brain activity, revealed through simultaneous recordings of EEG and fMRI.
We presented heterosexual females in a stable romantic relationship with pictures of their romantic partner, a close male friend, and a male stranger, displaying fearful, happy and neutral facial expressions. We combined EEG and fMRI data using representational similarity analyses (RSA; 27,28), a subtype of multivariate pattern analysis (MVPA). Instead of investigating the amount of activation in a brain region (as is typical for fMRI), or activity at a given time point (as with EEG/ERP), RSA focuses on the representational structure of activations to describe the information space captured by a given measure. This structure is described by representational dissimilarity matrices (RDMs; see Figure 1), composed of pairwise distances between condition-specific activations, reflecting abstractions in information space. Because they are abstracted from the physical properties of the original measures, RDMs can be compared across different modalities, such as EEG and fMRI, despite profound differences in temporal and spatial resolution. Furthermore, data-derived RDMs can be compared to conceptual, model-derived RDMs in order to test and contrast theoretical predictions. In this study, we compared RDMs centred around each voxel in the fMRI data with RDMs for each time point in the EEG, allowing for the temporally and spatially precise examination of the processing of personal relevance and emotional expression. Additionally, we tested fMRI RDMs against model-based RDMs reflecting the processing of personal relevance and emotional expressions.
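The core logic of RSA can be illustrated with a minimal sketch using toy data (the array sizes and variable names are our own illustration, not the study's actual pipeline): an RDM is computed from condition-wise activation patterns in each modality, and because RDMs abstract away from measurement units, an fMRI RDM and an EEG RDM can be compared directly.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy data: 9 conditions (3 identities x 3 emotions) measured in two
# modalities with very different dimensionality, standing in for
# fMRI voxels and EEG channels at one time point.
fmri_patterns = rng.normal(size=(9, 500))   # conditions x voxels
eeg_patterns = rng.normal(size=(9, 64))     # conditions x channels

# An RDM is the matrix of pairwise distances between condition patterns;
# correlation distance (1 - r) is a common choice in RSA.
fmri_rdm = squareform(pdist(fmri_patterns, metric="correlation"))
eeg_rdm = squareform(pdist(eeg_patterns, metric="correlation"))

# RDMs are symmetric with a zero diagonal, so only the upper triangle
# carries information; the two modalities are compared via the rank
# correlation of their upper-triangle entries.
triu = np.triu_indices(9, k=1)
rho, p = spearmanr(fmri_rdm[triu], eeg_rdm[triu])
```

A rank correlation is often preferred here because it does not assume a linear relationship between dissimilarities measured in different modalities.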
We predicted that personally relevant faces would elicit increased BOLD activation in the extended face network and increased ERP amplitudes at later processing stages. We further predicted that the fMRI RDM structure in these extended face processing regions would correlate with model-based RDMs representing personal relevance, and with RDM structure of EEG at later processing stages.
Emotional relevance was predicted to increase BOLD responses in the amygdala, insula and orbitofrontal cortex, and ERP amplitudes at sensory and later stages (P1 and P3, respectively). We predicted that fMRI RDM structure in these brain regions would correlate with model-based RDMs representing emotional relevance, and with the RDM structure of the EEG as early as 100 ms.

For the comparison of emotion conditions (Fig. 2), there was widely distributed activation for Happy > Neutral, including a large cluster in the parieto-occipital cortex (including the intracalcarine cortex, lingual gyrus and precuneus), the cerebellum and the brain stem. Further clusters were located in anterior brain regions, including the medial and ventromedial prefrontal cortex, orbitofrontal cortex, ACC, inferior and superior frontal gyri, and insular cortex. Finally, significant activation was seen subcortically, including in the left amygdala, bilateral thalamus and left caudate.
In the contrast Happy > Fear, significant activation was found in bilateral insula and

Emotional expression
Comparisons with the Emotion RDM revealed representations that were less widespread than the effects of Identity, but included the bilateral amygdala, right putamen, orbitofrontal and ventromedial prefrontal cortex, temporoparietal junction (TPJ) and left inferior frontal gyrus. A theoretical RDM for valence (distance = 1 from neutral to both fear and happy, distance = 2 between fear and happy) revealed representations in most areas of the core and extended face network, including bilateral amygdala, insula, hippocampus, and ventromedial and orbitofrontal PFC (Fig. 5). Analyses with a theoretical RDM for emotional arousal (assuming increased values for fearful and happy compared to neutral faces) revealed no significant representations. Activations were thresholded at FWE < 0.05 using permutation tests and threshold-free cluster enhancement (TFCE).

Combined EEG-fMRI RSA Analyses
Peaks in EEG RDM structure were evident at 52 ms, 108 ms, 204 ms, 308 ms, 428 ms, and 660 ms. Accordingly, these RDMs were used as searchlight RDMs in the whole-brain analyses on single-subject fMRI data (Fig. 6).
For the EEG RDM at 52 ms after stimulus onset there were no brain regions with significantly correlated spatial RDMs. EEG representations at 108 ms after stimulus

Discussion
While it is accepted that emotional facial expressions are processed extremely rapidly, studies of the perception of face familiarity have focussed on the more elaborate and slower processing involved in face recognition. Yet the faces of those closest to us are arguably more personally relevant than the emotional expressions of strangers, and thus might receive a high degree of processing priority. In this study, we investigated the time course of neural processing of emotional and personal relevance in faces using simultaneous EEG-fMRI and representational similarity analyses. Our results point to the strong and rapid impact of personal relevance on face processing, with increased activation in the core and extended face processing network, as well as fast attention allocation in ERPs, and shared representations between EEG and fMRI spatial activation patterns in multiple cortical regions starting as early as 100 ms after stimulus onset.
The effects of emotional facial expressions on fMRI activation in this study were limited to happy compared to neutral and fearful faces, which elicited increased BOLD activation in a network previously associated with emotion processing, including the insular and orbitofrontal cortex, amygdala, and thalamus. These results suggest the preferential processing of happy facial expressions in a personally relevant (experimental) context in healthy subjects. This result is corroborated by our finding of increased model-based representations of stimulus valence (dimensional coding fearful / neutral / happy) in BOLD activation patterns, rather than of undirected emotion categories or arousal values.
The personal relevance of the faces had a far more substantial impact on ERP and fMRI measures than their emotional content. In accordance with previous research, our unimodal fMRI analyses showed that personal relevance increased activation in the core and extended face processing network (16,22,25). In contrast, representations corresponding to the romantic love RDM were more focal and mostly outside the face network, including subcortical regions in the putamen, caudate, amygdala, thalamus and cerebellum, and regions of the brain stem including the corticospinal and corticobulbar tracts and the substantia nigra, but also cortical regions including the insular cortex, medial and ventromedial prefrontal cortex, inferior frontal gyrus and intracalcarine cortex. These results are consistent with previous findings that the processing of a loved one's face engages areas of the brain's dopaminergic reward circuitry in the dorsal striatum (putamen and caudate) (13) and substantia nigra.

Consistent with this, region-of-interest analyses revealed increased BOLD activation for Partner compared to Stranger in the ventral tegmental area (VTA), the major source of dopaminergic neurons in the mesolimbic dopamine system. In the left VTA, analyses also showed an interaction of personal relevance and emotional expression: while activity was generally increased in response to the Partner's face compared to the Stranger's, activation to the Friend's face was increased only for happy expressions, showing a coding of both identity and emotional expression that reflects the degree of personal reward value.
Event-related potentials revealed that personal relevance modulated sensory perception (P1), structural encoding (N170), and higher-order processing (P3/LPC), with increased amplitudes for the Partner's face. The increased P1 amplitudes are especially noteworthy, since this component reflects perceptual processing in extrastriate cortex at around 100 ms after stimulus onset, and thus precedes structural face encoding as indexed by the N170 component (29), as well as subsequent face identification processes (2).
Increased activation in visual areas within 100 ms after stimulus onset has been reported for stimuli with emotional and motivational relevance (30), and has recently been related to associative learning of physical stimulus properties (31-33). Thus, our results are not necessarily in conflict with models of face recognition based on structural encoding and associative memory (2), but might reflect reward value associated with (the physical features of) a loved one's face, extracted prior to structural encoding mechanisms. Corroborating previous findings, analyses of ERP amplitudes at higher-level processing stages, as indexed by the P3 and late positive complex, also revealed increased activation for Partner compared to Friend and Stranger. Again, these findings demonstrate that our experimental effects reflect the personal relevance of the presented faces rather than simply familiarity or identity.
While ERP analyses suggest early modulation of visual perceptual processes by personal relevance, combining EEG and fMRI data using RSA allows for the time-resolved analysis of representations within the fMRI data. These analyses suggest fast modulation of neural processing across a far more widespread collection of cortical regions. Shared EEG-fMRI representations were first apparent as early as 100 ms after stimulus onset, not only in the visual cortex, but also in the ventromedial and medial prefrontal cortex, regions involved in value encoding and self-referential processing (34,35). Interestingly, the medial prefrontal cortex, with its ability to encode social value and reward, was recently also discussed as a guiding structure in infants' cortical face specialisation (36). Finally, at the stage of higher-order processing, from approximately 400 ms after stimulus onset, representations were identified in all regions of the core and extended face processing network, including amygdala, insular and orbitofrontal cortex (22), but also in regions identified with the theoretical RDM for romantic love, such as the putamen, cerebellum and regions of the brain stem. Consistent with the EEG RDMs, ERPs showed evidence for differential processing of romantic partners on a similar time scale. Across modalities and analysis types (unimodal analyses, multimodal and theoretical RSA), our results suggest that early processing mainly reflects the amplification of familiarity (i.e., Partner and Friend compared to Stranger), whereas effects specific to romantic love only emerge at a higher-order processing stage. However, it is important to keep in mind that the terms 'familiarity' and 'love' in this case are not exclusively related to Friend and Partner, respectively, but reflect different aspects of close relationships (39): both Partner and Friend are highly personally relevant, and one can feel friendship love towards a close friend.
On the other hand, a romantic partner can also serve the role of a friend. Thus, romantic love in our study refers to representations that are unique to a romantic relationship, over and above a close friendship.
In this study, emotional expressions were associated with relatively weak effects, while effects related to personal relevance clearly dominated. This should not come as a surprise: being able to rapidly identify and respond to those closest to us is a fundamental ability present from early infancy (40). It also suggests that emotional expressions are not processed in a rigid, automatic way, but that brain responses are rather shaped by the personal relevance of the stimuli. Our results therefore present a compelling case for including stimuli tailored to the individual in future research, rather than relying on standardised datasets of largely personally irrelevant stimuli.
Understanding the time course and neural circuitry of processing both emotional and personal relevance might be of special importance in clinical conditions where the processing of emotional or social information is specifically disrupted. Autism spectrum conditions (ASC) are one such example: a vast body of research reports atypical processing of (emotional) faces (41); however, research also suggests typical processing of personally relevant faces (42). Therefore, atypical (emotional) face processing might reflect altered personal relevance attached to strangers, rather than a dysfunction of the neural face processing architecture, with potential consequences for the design of targeted interventions.
What is clear from this study is that in basic social neuroscience, as well as in clinical research, there are compelling reasons to study responses to individualised social stimuli. After all, by far the most pervasive social stimuli we encounter in our daily lives, the ones our brains are most attuned to, are our close friends and loved ones.

Methods
The study was reviewed and approved by the University of Reading research ethics committee. All participants provided informed consent before taking part in the study.

Participants
Data were collected from 22 female participants (mean age = 19.8 years, sd = 0.9 years).
For four participants, no EEG data were available due to technical problems or insufficient data quality. All participants were in a heterosexual romantic relationship.

EEG preprocessing
Offline, MR gradient artefacts were identified using synchronisation markers from the scanner and removed using a modified version of the template subtraction algorithm (52) as implemented in BrainVision Analyzer 2 (BrainProducts). Gradient artefacts were removed from continuous, baseline-corrected data (using the whole artefact for baseline correction) with a sliding window of 21 artefacts. After correction, data were downsampled to 250 Hz and low-pass filtered using an FIR filter with a 70 Hz cutoff.
Ballistocardiographic artefacts were identified using the ECG channel with a semi-automatic template matching procedure; correction was performed using a template subtraction approach based on a sliding window of 15 pulse intervals. To identify additional artefacts caused by eye blinks, eye movements and residual ballistocardiographic artefacts, we performed a restricted Infomax ICA (53) and removed artefact components based on their topography and time course. Data were re-referenced to the average reference, segmented into epochs from -100 ms to 800 ms relative to stimulus onset, and baseline-corrected using the 100 ms pre-stimulus interval.
Trials with activity exceeding ± 100 μV or voltage steps larger than 100 μV were marked as artefact trials (0.6 % of trials) and excluded from further analyses. Data were averaged per participant and experimental condition.
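The amplitude and voltage-step rejection criteria above can be sketched as follows (the function name and array layout are our own illustration, not part of the original BrainVision Analyzer pipeline):

```python
import numpy as np

def mark_artefact_trials(epochs, amp_thresh=100.0, step_thresh=100.0):
    """Flag trials whose absolute amplitude exceeds amp_thresh (in uV)
    on any channel, or whose sample-to-sample voltage step exceeds
    step_thresh (in uV).

    epochs: array of shape (n_trials, n_channels, n_samples), in uV.
    Returns a boolean array of length n_trials (True = artefact trial).
    """
    # Peak absolute amplitude per trial, across channels and samples
    amp_bad = np.abs(epochs).max(axis=(1, 2)) > amp_thresh
    # Largest absolute voltage step between consecutive samples per trial
    step_bad = np.abs(np.diff(epochs, axis=2)).max(axis=(1, 2)) > step_thresh
    return amp_bad | step_bad
```

Flagged trials would then be dropped before averaging per participant and condition.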

Data analyses
All voxelwise fMRI statistical tests were one-sided. All other tests were two-sided unless otherwise stated. Huynh-Feldt correction was applied for violations of sphericity in rm-ANOVAs; in post-tests, p-values were corrected for multiple comparisons using Bonferroni correction.

RSA analyses
RSA analyses were performed using the CoSMoMVPA toolbox (55) in Matlab R2017b.
For plotting, we further used the toolbox for representational similarity analyses (56).

Conceptual model RDMs
Conceptual model RDMs were used as searchlights with voxel-wise fMRI RDMs in order to identify brain regions representing different aspects of the face stimuli. Conceptual model RDMs for Emotion and Identity were created based on the assumption of high similarity within experimental categories and low similarity across categories (coded as distances of 0 and 1, respectively). We also computed conceptual model RDMs for face familiarity (Partner/Friend vs. Stranger) and romantic love (Partner vs. Friend/Stranger; see Figure 4), as well as for emotional valence (distance happy/fearful vs. neutral = 1, distance happy vs. fearful = 2; Figure 5) and arousal (distance happy/fearful vs. neutral = 1, distance happy vs. fearful = 0).
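These model RDMs are straightforward to construct. A minimal sketch for the 3 identities x 3 emotions design (condition ordering and variable names are our own, for illustration):

```python
import numpy as np

# Condition order: identity varies slowest, emotion fastest
identities = np.repeat([0, 1, 2], 3)   # 0=Partner, 1=Friend, 2=Stranger
valence = np.tile([-1, 0, 1], 3)       # fear=-1, neutral=0, happy=+1

def categorical_rdm(labels):
    """Distance 0 within a category, 1 across categories."""
    labels = np.asarray(labels)
    return (labels[:, None] != labels[None, :]).astype(float)

identity_rdm = categorical_rdm(identities)
familiarity_rdm = categorical_rdm(identities == 2)  # Partner/Friend vs. Stranger
love_rdm = categorical_rdm(identities == 0)         # Partner vs. Friend/Stranger
# Valence: |(-1) - (+1)| = 2 between fear and happy, 1 to neutral
valence_rdm = np.abs(valence[:, None] - valence[None, :]).astype(float)
# Arousal: fear/happy vs. neutral = 1, fear vs. happy = 0
arousal_rdm = categorical_rdm(valence == 0)
```

Coding the categorical models as 0/1 distances and valence as absolute differences on a single dimension reproduces the distance structure described above.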

Joint EEG-fMRI
To combine EEG and fMRI data, we performed representational similarity analyses using EEG RDMs as searchlights with each participant's individual fMRI RDMs for each voxel, using Pearson's r. The resulting brain maps of similarity were combined across subjects using permutation tests and threshold-free cluster enhancement in order to ensure a corrected familywise error (FWE) rate of < .05.
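Per participant, this searchlight amounts to correlating one EEG RDM with the local fMRI RDM at every voxel, yielding a whole-brain similarity map. A minimal sketch (function name and array layout are assumptions for illustration; the actual analyses used CoSMoMVPA):

```python
import numpy as np

def eeg_fmri_searchlight(voxel_rdms, eeg_rdm):
    """Correlate one EEG RDM with each voxel's local fMRI RDM using
    Pearson's r, yielding a similarity map for one participant.

    voxel_rdms: array of shape (n_voxels, n_cond, n_cond)
    eeg_rdm: array of shape (n_cond, n_cond)
    Returns an array of length n_voxels with one r value per voxel.
    """
    n_cond = eeg_rdm.shape[0]
    iu = np.triu_indices(n_cond, k=1)
    eeg_vec = eeg_rdm[iu]
    vox_vecs = voxel_rdms[:, iu[0], iu[1]]  # (n_voxels, n_pairs)
    # Pearson r as the mean product of z-scored dissimilarity vectors
    ev = (eeg_vec - eeg_vec.mean()) / eeg_vec.std()
    vv = (vox_vecs - vox_vecs.mean(axis=1, keepdims=True)) \
         / vox_vecs.std(axis=1, keepdims=True)
    return (vv @ ev) / len(ev)
```

The per-participant maps would then be stacked across subjects before the permutation-based, TFCE-corrected group statistics.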