Modulation of early level EEG signatures by distributed facial emotion cues

Face perception plays an important role in our daily social interactions, as it is essential for recognizing emotions. The N170 Event Related Potential (ERP) component has been widely identified as a major face-sensitive neuronal marker. However, despite extensive investigations conducted to examine this electroencephalographic pattern, there is yet no agreement regarding its sensitivity to the content of facial expressions. Here, we aim to clarify the EEG signatures of the recognition of facial expressions by investigating ERP components that we hypothesize to be associated with this cognitive process. We asked whether the recognition of facial expressions is encoded by the N170 as well as at the level of the P100 and P250. In order to test this hypothesis, we analysed differences in amplitudes and latencies for the three ERPs in a sample of 20 participants. A visual paradigm requiring explicit recognition of happy, sad and neutral faces was used. The facial cues were explicitly controlled to vary only regarding mouth and eye components. We found that non-neutral emotional expressions elicit a response difference in the amplitude of the N170 and P250, in contrast with the P100, thereby excluding a role for low-level factors. Our study sheds new light on the controversy over whether emotional face expressions modulate early visual response components, which have often been analysed separately. The results support the tenet that neutral and emotional faces evoke distinct N170 patterns, but go further by revealing that this is also true for the P250, unlike the P100.

The dynamics of face perception involve several brain regions and circuits responsible for the early and high-level visual processing of faces (1). Studies using functional magnetic resonance imaging (fMRI) can identify the brain regions underlying these processes but cannot capture their temporal properties. Thus, the neuronal dynamics of face perception have been extensively studied based on electroencephalographic (EEG) signals (2).

Social attention has been used as a synonym for non-verbal social communication behaviors. Of socially relevant stimuli, faces and gazes are the two most important elements triggering this cognitive process (3,4). Categorization of facial expressions implies the allocation of selective attention, which may also modulate the neural response differences between the categories of emotions. When participants perform implicit tasks more directed to the emotional expression, there seems to be a lower probability of finding differences between the categories of emotions, namely for the N170 component (5).

Facial expressions elicit robust neuronal responses that can be measured through event-related potential (ERP) analysis. The lateral occipitotemporal face-sensitive N170 component is generally used as a major neuronal marker of face recognition. However, the literature is not unanimous about how the N170 is affected by the emotional content of facial expressions (6,7). Some authors suggest that the N170 results from early automatic structural encoding of faces, which occurs before a comparison of these structural descriptions with representations stored in memory (8,9). On the other hand, others challenge the view that structural encoding is temporally distinct from emotion processing and defend that the N170 can be modulated by emotional expressions, as reflected in larger amplitudes and longer latencies (10-12).

Previous findings also suggest the involvement of the P100 in face-specific visual processing (13,14). However, this posterior component, often also assumed to have an extrastriate contribution, is believed to reflect the processing of low-level visual features; therefore, the role of the P100 in face recognition remains an open question (5). Another potentially relevant ERP is the P250. Like the commonly measured N170, the P250 is also maximal at occipito-temporal sites. There is evidence for P250 modulation during face perception (15,16), but this ERP component has often been related to higher-level nonspecific aspects of face processing (17,18).

According to Puce et al. (19), these ERPs can be grouped into a robust positive-negative-positive (P100-N170-P250) ERP complex, serving as neuroelectric markers in the investigation of the visual processes involved in the recognition of facial expressions. There is also evidence that both the P100 and N170 are involved in generating differential responses to neutral vs. emotional expressions, similarly to regions such as the amygdala (20).

We describe together the attributes of the participants, as well as data acquisition, EEG processing and data analysis, since this information is similar between experiments. Separately, we present the stimulation paradigms and the results of the two experiments.

The use of the paradigm described below required participants to recognize facial expressions, which was informative for a subsequent action related to error monitoring neural processes (not reported in this paper). For this purpose, two experiments were performed; the main task objective was the recognition and discrimination of facial expressions (happy, sad and neutral) by the participant.

The two experiments differ in whether a neutral face is present only in the initial rest period (Experiment 1) or also in the instruction phase (Experiment 2), which allowed us to control for the role of an explicit instruction for the neutral stimulus.

In total, 40 adults with normal or corrected-to-normal vision and no medical or psychological disorders were included in this study. In experiment one, 20 individuals were included (nine females), mostly right-handed (19), aged between 20 and 36 years (26.8 ± 4.514); in experiment two, 20 individuals were included (11 females, all right-handed), aged between 20 and 33 years (25.7 ± 4.193). Of these subjects, 5 participated in both experiments.

Face stimuli, obtained from the Radboud Faces Database (23), were presented on a 17-inch monitor situated 60 cm away from the subject. The facial expression images had a mean luminance of 7.67 × 10¹ cd/m² (with screen luminance ranging from 2.44 × 10¹ cd/m² to 1.76 × 10² cd/m²). Neutral, happy, and sad faces (height 4.55º and width 4.90º) were presented to the participants in a pseudo-randomized order (Figure 1A). A go/no-go task, based on facial expression recognition, was used to guarantee implicit expression processing (Figure 1B).
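As an aside on the reported stimulus geometry, the following MATLAB sketch converts the visual angles above into approximate on-screen size for the 60 cm viewing distance; it only illustrates the standard relation between visual angle and physical size and is not part of the original analysis pipeline.

% Illustrative conversion of the reported visual angles into on-screen size
% for the 60 cm viewing distance (not part of the original analysis).
viewingDistance_cm = 60;     % monitor distance reported above
angleHeight_deg    = 4.55;   % stimulus height in degrees (experiment 1)
angleWidth_deg     = 4.90;   % stimulus width in degrees (experiment 1)

% physical size = 2 * d * tan(theta / 2)
height_cm = 2 * viewingDistance_cm * tand(angleHeight_deg / 2);
width_cm  = 2 * viewingDistance_cm * tand(angleWidth_deg  / 2);

fprintf('Stimulus size on screen: %.2f x %.2f cm (H x W)\n', height_cm, width_cm);
% approx. 4.77 x 5.13 cm at 60 cm viewing distance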

The stimulation paradigm included six conditions: rest, gap, emotional expression, fixation, target, and response. Initially, a neutral face was displayed for 1000 ms (rest period), followed by a gap period of 500 ms. Then, the instruction for the go/no-go task was presented for 750 ms; here, a happy or sad face was shown. The expression type cued the subsequent action and thereby required attentive processing. A happy face with a gaze means go and perform a saccade in the same gaze direction; a sad face with a gaze means go and perform a saccade in the opposite gaze direction; a happy or sad face without a gaze means no-go. After a fixation period (500-1000 ms) and a target shown for 200 ms in the same direction as the gaze (height and width 0.72º), a black background appeared for 1500 ms, during which the participants performed the task.

A standard set of colour photographs of a male, each depicting happy, sad, and neutral faces, obtained from the Radboud Faces Database (23), were presented on a monitor situated 60 cm away from the subject. Neutral, happy, and sad faces (height 9.02º) were presented to the participants in a pseudo-randomized order (Figure 1A). A go/no-go task, based on facial expression recognition, was used to guarantee implicit expression processing (Figure 1B).

… cues in a go/no-go paradigm (B) and allowed the study of implicit expression processing. The sequence between face presentation and participants' response is also illustrated.
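For clarity, the trial structure and cue-response mapping of experiment 1 can be summarized as in the MATLAB sketch below. The field and function names are hypothetical, since the original stimulation script is not reproduced in this paper; the neutral-cue branch applies only to experiment 2, described next.

% Hypothetical summary of the experiment 1 trial structure and of the
% cue-response rules described above (names are placeholders, not the
% original stimulation code).
trial.rest_ms     = 1000;        % neutral face (rest period)
trial.gap_ms      = 500;         % blank gap
trial.cue_ms      = 750;         % happy or sad face (go/no-go instruction)
trial.fixation_ms = [500 1000];  % jittered fixation interval
trial.target_ms   = 200;         % lateralised target (0.72 deg)
trial.response_ms = 1500;        % black background, response window

disp(cueToAction('happy', true));   % -> saccade towards the gaze direction
disp(cueToAction('sad',   true));   % -> saccade away from the gaze direction
disp(cueToAction('sad',   false));  % -> no-go

function action = cueToAction(expression, hasGaze)
    % Map the facial expression cue and gaze presence onto the required action.
    if ~hasGaze
        action = 'no-go';                               % any face without gaze
    elseif strcmp(expression, 'happy')
        action = 'saccade towards the gaze direction';
    elseif strcmp(expression, 'sad')
        action = 'saccade away from the gaze direction';
    else
        action = 'no-go';                               % neutral cue (experiment 2 only)
    end
end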

As in experiment 1, this second paradigm also includes six conditions: rest (neutral face), gap 1, social expression, gap 2, action instruction, and action. Initially, a neutral face was displayed for 1000 ms (rest period), followed by a gap period of 200 or 500 ms. Then, the instruction (social expression) for the go/no-go task was presented for 350 ms; here, a happy, sad or neutral face was shown. It is in this condition that the biggest difference between the two experiments lies: in experiment one we only had happy and sad faces as the social expression. A happy face with a gaze means go and perform a saccade or button press in the same gaze direction; a sad face with a gaze means go and perform a saccade or button press in the opposite gaze direction; a happy, sad or neutral face without a gaze means no-go. After a gap 2 period (200 ms) and an action instruction shown for 350 ms (saccade target: diamond (♦), height and width 2.517º; button press target: square (□), height and width 1.819º), a black background appeared for 1000 ms, during which the participants performed the task.

EEG data were recorded using a 64-electrode cap (Compumedics Quick Cap; NeuroScan, USA). The scalp of the participants was first cleaned using abrasive gel and then the electrode cap was placed on their head according to the international 10-20 standard system.

Electrooculogram (EOG) data were recorded via two pairs of additional electrodes, placed above and below the left eye and at the external corners of both eyes. The reference electrode was located between Cz and CPz. The impedance of the electrodes was kept under 20 kΩ during the recordings. The electrodes were connected directly to the SynAmps 2 amplifier system (Compumedics NeuroScan, Texas, USA) and sampled at 1000 Hz. Data were recorded using Curry Neuroimage 7.08 (NeuroScan, USA). For each paradigm, the participants were informed about the respective task. The total duration of the experimental procedure, including the preparation procedures, was around 60 min.

The eye-tracking (ET) data acquisition started with the calibration of the eye-tracker. The data were recorded at 120 Hz with a tower-mounted, high-accuracy (0.25°-0.5°) monocular eye-tracker (iView X™ Hi-Speed 1250 Hz, SMI - SensoMotoric Instruments, Teltow, Germany).

In both experiments, we used home-made MATLAB scripts (R2018b) and EEGLAB toolbox functions (version 2) for EEG signal preprocessing and analysis.

The EEG data were downsampled to 500 Hz and filtered between 0.5 and 45 Hz. Noisy channels were removed and the electrodes were re-referenced to the average of all EEG (excluding EOG) channels. The data were segmented into epochs of 1700 ms length in experiment 1 and 1200 ms in experiment 2, with a 200 ms pre-instruction baseline. Epochs were visually inspected, and noisy trials were removed. Ocular, muscular and cardiac artifacts were removed from all EEG channels based on independent component analysis (ICA) (24). The noisy channels previously removed were interpolated (spherical interpolation).

The N170 ERP component has been described as a negative peak around 170 ms after stimulus onset, particularly evident at the parieto-occipital electrodes (25,26), whereas the P100 and P250 are positive ERP components. Their topography has also been described over the lateral parieto-occipital cortex, with all these ERP components related to rapid processes of selective attention.
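As a rough illustration, the preprocessing steps described above could be expressed with standard EEGLAB calls along the following lines; the file name, event code, channel labels and component indices are placeholders, and the original analysis scripts may differ.

% Rough EEGLAB sketch of the preprocessing steps described above. File name,
% event code, channel labels and component indices are placeholders; the
% original analysis scripts are not reproduced in this paper.
% (EEGLAB is assumed to be on the MATLAB path.)
EEG = pop_loadset('filename', 'subject01.set');        % hypothetical raw data set

EEG = pop_resample(EEG, 500);                          % downsample to 500 Hz
EEG = pop_eegfiltnew(EEG, 0.5, 45);                    % band-pass filter 0.5-45 Hz

originalChanlocs = EEG.chanlocs;                       % keep the full montage for later interpolation
EEG = pop_select(EEG, 'nochannel', {'P7'});            % remove a noisy channel (placeholder label)
eogIdx = find(ismember({EEG.chanlocs.labels}, {'VEOG', 'HEOG'}));  % placeholder EOG labels
EEG = pop_reref(EEG, [], 'exclude', eogIdx);           % average reference of EEG channels, EOG excluded

EEG = pop_epoch(EEG, {'instruction'}, [-0.2 1.5]);     % 1700 ms epochs, experiment 1 (placeholder event code)
EEG = pop_rmbase(EEG, [-200 0]);                       % 200 ms pre-instruction baseline

EEG = pop_runica(EEG, 'icatype', 'runica');            % ICA decomposition
EEG = pop_subcomp(EEG, [1 2]);                         % remove artifact components (indices chosen after inspection)
EEG = pop_interp(EEG, originalChanlocs, 'spherical');  % interpolate the previously removed channels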

When comparing EEG responses to happy, sad, and neutral faces at the level of the P100, we found no differences in either amplitude or latency. The grand average ERP waveforms are illustrated in Figure 3, whereas the statistical results are summarized in Figure 4.
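The amplitude and latency comparisons reported in this section rely on component measures taken from the condition averages. Purely as an illustration, peak amplitude and latency could be extracted as in the sketch below; the channel indices, search windows and placeholder data are assumptions, not the study's actual parameters.

% Illustrative extraction of peak amplitude and latency from a condition
% average (placeholder data; channel indices and windows are assumptions).
erpHappy     = randn(64, 850);            % [channels x time] average: 64 channels, 1700 ms at 500 Hz
times        = linspace(-200, 1498, 850); % time axis in ms (2 ms steps)
rightOccChan = [60 62];                   % placeholder right parieto-occipital channels

[p100amp, p100lat] = peakMeasure(erpHappy, times, rightOccChan, [80 150],  'pos');
[n170amp, n170lat] = peakMeasure(erpHappy, times, rightOccChan, [130 210], 'neg');
[p250amp, p250lat] = peakMeasure(erpHappy, times, rightOccChan, [200 300], 'pos');

function [amp, lat] = peakMeasure(erp, times, chanIdx, win_ms, polarity)
    sel  = times >= win_ms(1) & times <= win_ms(2);   % restrict to the search window
    wave = mean(erp(chanIdx, :), 1);                  % average over the selected channels
    if strcmp(polarity, 'neg')
        [amp, i] = min(wave(sel));                    % N170: most negative point
    else
        [amp, i] = max(wave(sel));                    % P100 / P250: most positive point
    end
    winTimes = times(sel);
    lat = winTimes(i);                                % peak latency in ms
end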

Error bars depict the standard error of the mean.

3.2.2 N170

When comparing the three conditions at the level of the N170, we found differences in amplitude between neutral and sad in the right hemisphere and between neutral and happy in both hemispheres. The grand average ERP waveforms are illustrated in Figure 3, whereas the statistical results are summarized in Figure 4.

When comparing the three conditions at the level of the P250, we found differences in amplitude between neutral and sad and between neutral and happy in both hemispheres, and in latency between neutral and happy in the right hemisphere. The grand average ERP waveforms are illustrated in Figure 3, whereas the statistical results are summarized in Figure 4.
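The excerpt does not spell out the statistical procedure behind these contrasts. Purely for illustration, a per-hemisphere paired comparison of peak amplitudes between two conditions could look like the following sketch, using made-up numbers rather than the study's actual data or test.

% Illustrative paired comparison of per-subject N170 peak amplitudes between
% two conditions (made-up numbers; the study's actual statistical test is not
% specified in this excerpt). Requires the Statistics and Machine Learning Toolbox.
n170_neutral = -4 + randn(20, 1);   % placeholder right-hemisphere peak amplitudes (uV), 20 subjects
n170_happy   = -5 + randn(20, 1);

[~, p, ~, stats] = ttest(n170_neutral, n170_happy);    % paired t-test
fprintf('Neutral vs happy N170 (right hemisphere): t(%d) = %.2f, p = %.3f\n', ...
        stats.df, stats.tstat, p);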

When comparing EEG responses to happy, sad, and neutral faces at the level of the P100, we found no differences in either amplitude or latency. The grand average ERP waveforms are illustrated in Figure 5, whereas the statistical results are summarized in Figure 6.

In the right hemisphere, the P100 average amplitude was 3.229 ± 0.651 µV, 3.367 ± 0.602 µV and 3.359 ± 0.581 µV for the happy, sad and neutral conditions, respectively. Latencies were recorded at 129 ms after happy stimuli, 130 ms after sad and 132 ms after neutral stimuli. In the left hemisphere, the P100 average amplitude was 4.488 ± 0.877 µV, 4.611 ± 0.946 µV and 5.007 ± 0.958 µV for happy, sad and neutral faces. Latencies were recorded at 131 ms after happy stimuli and 133 ms after sad and neutral stimuli.

When comparing EEG responses to happy, sad, and neutral faces at the level of the N170, we found differences in amplitude between neutral and sad faces in both hemispheres. The grand average ERP waveforms are illustrated in Figure 5, whereas the statistical results are summarized in Figure 6.

When comparing EEG responses to happy, sad, and neutral faces at the level of the P250, we found differences in amplitude between neutral and happy, and between neutral and sad, in both hemispheres. The grand average ERP waveforms are illustrated in Figure 5, whereas the statistical results are summarized in Figure 6.

The results of the sLORETA analysis for the ERPs modulated by emotional expressions (happy and sad), at 170 ms and 250 ms after stimulus presentation onset, are depicted in Figure 7. The source estimated for expression at 170 ms (N170) was found in the precuneus, BA 7 (MNI coordinates: x = -5, y = -80, z = 50), and at 250 ms (P250) in the postcentral gyrus, BA 5 (MNI coordinates: x = -5, y = -50, z = 70), significant at P = 0.0002, two-tailed t test.

In this study, we investigated the temporal dynamics of the neural mechanisms underlying face recognition, following the hypothesis that not only the N170 but also other related components such as the P100 and P250 are modulated by the emotional content of facial stimuli. To this end, we used an implicit face recognition task, while controlling for selective attention. We tested for amplitude and latency differences between neutral and emotional expressions, particularly happy and sad faces. Moreover, we aimed at exploring the neural sources of social attention.

The N170 is a well-established ERP component of face processing (4). However, there is no consensus regarding N170 selectivity for the content of facial expressions. Some studies support the presence of an N170 amplitude dependence on emotional facial expressions (31,32), whereas others suggest that this ERP is modulated by low-level features of facial stimuli but does not discriminate between expressions (9,33). Our results contribute to this debate by showing the presence of an effect when attention is controlled for.

Our results show indeed that there is a clear difference in electrophysiological responses between expressions (happy and sad) and neutral faces. In both experiments a similar pattern was observed: we saw a response difference between facial expressions and neutral faces (except for the P100), with no significant differences between sad and happy faces. These results are consistent with previous studies in which the N170 was affected by emotional expression, specifically happy and fearful faces had larger amplitude than neutral faces (34), but did not discriminate between different emotions (35). The N170 has been proposed to represent an early configural stage of face processing, which may reflect activation related to the structural coding of faces; from this point of view, response levels might be expected to be relatively invariant to emotional details in facial expressions. However, Luo et al., 2010 (33) proposed that ERPs in the 150 to 300 ms time period constitute a "second stage" of expression processing that is already sensitive to emotionality in general (compared to a neutral expression), arguing that the differentiation between specific expressions/emotions occurs only in a "third stage" that starts after 300 ms.

The P100 reflects the earliest stage of visual processing and is associated with processing the sensory characteristics of a visual stimulus, and is therefore unlikely to be modulated by familiarity and expressions (36). However, some studies suggest that differentiation between expressions, such as fear and anger or happiness and anger, can occur at this stage (37,38).

According to our results, no significant difference was identified between the P100 amplitudes for the three facial expressions (happy, sad and neutral); as such, we argue that this ERP component is not modulated by expressions. The P100 can also be modulated by attention, so the amplitude increases in prior studies may be due not only to expressions but also to increased effects of attention (13). Under controlled attention (further optimized in experiment 2), we found no significant effect of the presence of emotion cues on the P100. For comparison, there is corroborating evidence that, as early as the P100 component, negative and positive expressions, such as fear, sadness or happiness, cannot be differentiated (32,37). We did not observe a significant P100 amplitude difference between emotion types (happy and sad), or between both types of emotional faces and the neutral faces. These results are in agreement with other authors (25,39).

The P250 is the last ERP of the P100-N170-P250 complex; according to previous studies (40), this ERP is responsive to emotional expressions, with amplitudes augmented for emotional expressions compared to neutral ones.

Our results show, in both experiments, an amplitude shift towards more negative values for happy and sad facial expressions at the P250: unlike for the previous ERPs (P100, N170), the neutral faces were the ones with the highest amplitudes, with significant differences from the happy and sad amplitudes.

In this work, latency effects were only significant in the right hemisphere of the first experiment, for the P250, between neutral and happy faces. These results agree with previous studies (41), which found latency changes for expressions when compared to neutral stimuli, arguing that this difference reflected the impact of voluntary attention, which is also consistent with the conditions of experiment 1.

In both experiments 1 and 2, differences were only found between expressions and neutral faces and not between the emotional expression types (happy vs. sad), showing that the differences between the two experiments did not change the general pattern of results.

Our source analysis allowed us to investigate the spatial location of the signals studied and, as such, to analyse the sources related to facial expressions and to compare our results with previous studies. It was expected that the active sources would be located in the parieto-occipital region, because this region contributes mainly to the processing of facial expressions (42).

The N170 and P250 were modulated by the happy and sad expressions, so we analysed where the sources of each are located and confirmed that they match. The source of the N170 was identified in the precuneus. This region is involved in visuospatial processing and in directing attention in space, and is one of the core regions of the perspective-taking network, both from a cognitive and an affective point of view (43,44). The source of the P250 was located in the postcentral gyrus, which belongs to the superior parietal lobe (SPL); these are major areas in the central attention system (45). The precuneus is also selectively connected to the SPL. Converging evidence then suggests that the SPL and the precuneus cooperate in directing attention in space not only during the execution of goal-directed movements, but also in the absence of overt motor responses (46). In a study by Simon et al., 2002 (47), participants performed various tasks, including attention, pointing, grasping, saccades and calculation. Both the precuneus and the SPL were activated by the saccadic and pointing tasks, while the precuneus showed more activation only for attention. Le et al., 1998 (48) reported that the shift of attention to visual