EEG frequency tagging reveals neural entrainment to people moving in synchrony

Humans and other animals have evolved to act in groups, but how does the brain distinguish multiple people moving as a group from multiple people moving independently? Across three experiments, we test whether biological motion perception depends on the spatiotemporal relationships among people moving together. In Experiment 1, we apply EEG frequency tagging to apparent biological motion and show that fluently ordered sequences of body postures drive brain activity at three hierarchical levels of biological motion processing: image, body sequence, and movement. We then show that movement-related, but not body- or image-related, brain responses are enhanced when observing four agents moving in synchrony. Neural entrainment was strongest for fluently moving synchronous groups (Experiment 2) displayed in an upright orientation (Experiment 3). Our findings show that the brain preferentially entrains to the collective movement of human agents, deploying the perceptual organization principles of synchrony and common fate for the purpose of social perception.

However, while the adaptive functions of group behavior are well documented, little is known about how the brain represents the movements of groups (Riddell & Lappe, 2018; Sweeny et al., 2013). Indeed, existing models of biological motion perception have largely focused on processing the movements of individuals (Giese & Poggio, 2003) or pairs (Hovaidi-Ardestani et al., 2018). These models have distinguished two pathways for processing biological movement. In the structure-from-motion pathway, biological motion perception arises from the kinematics of the observed actions; in the motion-from-structure pathway, it arises from combining sequences of static body shape snapshots into coherent movement (Giese & Poggio, 2003; Lange & Lappe, 2006). This dual-pathway structure is supported by evidence that biological motion perception does not require moving stimuli but can also be induced by sequences of static body images (Orgs et al., 2013), through neurons that integrate static body information over time (Jellema & Perrett, 2003; Singer & Sheinberg, 2010), both in extrastriate body- and movement-specific brain areas and in motor areas of the brain (Downing et al., 2006; Orgs et al., 2016; Stevens et al., 2006). Thus, research indicates that processing individual actions involves temporal integration of static body snapshots into movements (Giese & Poggio, 2003; Lange & Lappe, 2006). But how do these mechanisms contribute to group perception? Speaking to this question, recent studies have shown that the actions of multiple people can be processed in parallel (Cracco et al., 2015, 2016; Cracco & Brass, 2018a, 2018b, 2018c; Cracco & Cooper, 2019; Tsai et al., 2011). However, whether multiple people form a group is determined not by the number of people but primarily by the relationships among their movements (Templeton et al., 2018). Hence, a key question is how the brain distinguishes multiple people moving together from multiple people moving alone.
Here, we test the hypothesis that this involves two hierarchical stages of biological motion perception. First, body postures are combined into movements (Giese & Poggio, 2003;Lange & Lappe, 2006).
Next, these movements are bound into groups, by employing perceptual grouping principles such as synchrony and common fate (Wagemans et al., 2012) for the purpose of social perception.
To test this hypothesis, we developed a new EEG paradigm that dissociates different levels of biological motion processing. Specifically, we combined apparent biological motion (Orgs et al., 2013) with frequency tagging (Norcia et al., 2015) by letting participants watch repeating sequences of 12 body postures that produced either fluent or non-fluent apparent motion. Importantly, these sequences were symmetrical, with the second half of each sequence mirroring the first half played in reverse (Figure 1). According to the logic of frequency tagging (Norcia et al., 2015), this procedure should result in brain responses at three hierarchical frequencies: a response coupled to individual image presentation, at base rate (BR); a response coupled to the turning point in the sequence, at half cycle rate (BR/6); and a response coupled to the completion of the entire body posture sequence, at full cycle rate (BR/12).

In Experiment 1 (N = 10), we sought to validate this procedure and to test whether it captures the integration of body postures into movements for a single agent (Giese & Poggio, 2003; Lange & Lappe, 2006). To this end, we measured brain activity elicited by fluent, non-fluent, and random sequences (Supplementary Videos; Figure 1). Fluent sequences showed body postures in their natural order. This elicited an apparent motion percept that was perturbed in non-fluent sequences by reordering the postures in a disfluent order, and in random sequences by presenting images at random. Hence, all sequences were built from the same postures but differed in their structure. Random sequences did not have any structure and were included only to control for the role of sequential structure and its prediction more generally (Baker et al., 2014). In contrast, fluent and non-fluent sequences both had the same symmetrical structure, but this structure became salient only in fluent sequences. This is because the primary percept in fluent sequences is a series of movements, here presented at half cycle rate, whereas the primary percept in non-fluent sequences is a series of body postures, here presented at full cycle rate (Orgs et al., 2013, 2016; Shiffrar & Freyd, 1990). As a result, if our task captures the temporal binding of bodies into movements, fluent sequences should drive brain activity mainly at half cycle frequencies, whereas non-fluent sequences should instead drive brain activity mainly at full cycle frequencies.

Figure 1. Apparent motion sequences of 12 body postures presented at the base rate of 10 Hz. In the fluent condition, images are ordered to induce a coherent movement percept. This percept is perturbed by reordering the images in the non-fluent condition and by showing images at random in the random condition. Fluent and non-fluent sequences have the same symmetrical structure, with the second half of the sequence mirroring the first half in reverse. Hence, in these sequences, a turning point occurs at a frequency of 10/6 Hz (half cycle frequency), and the sequence repeats at a frequency of 10/12 Hz (full cycle frequency).
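The arithmetic behind this frequency hierarchy can be made explicit in a few lines. The following is a minimal sketch (the function name is ours, not from the paper):

```python
def tagging_frequencies(base_rate_hz: float, sequence_length: int = 12):
    """Return the (base rate, half cycle, full cycle) frequencies in Hz for a
    symmetrical apparent-motion sequence of `sequence_length` images."""
    full_cycle = base_rate_hz / sequence_length         # whole sequence repeats
    half_cycle = base_rate_hz / (sequence_length // 2)  # mirror-symmetry turning point
    return base_rate_hz, half_cycle, full_cycle

# Experiment 1: a 10 Hz base rate yields 10/6 Hz and 10/12 Hz responses.
base, half, full = tagging_frequencies(10.0)
print(round(half, 2), round(full, 2))  # 1.67 0.83
```

The same function gives 1.25 Hz and 0.625 Hz for the 7.5 Hz base rate used in Experiments 2-3.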
In Experiments 2-3, we then extended our paradigm from individual to group movements, by showing not one but four actors, moving either in or out of synchrony.
Synchrony is known to signal the size of a group of stimuli (Braddick et al., 2001; Wagemans et al., 2012), purely based on the sustained temporal coupling of their movement trajectories (Brick & Boker, 2011). If the brain recruits the principles of perceptual organization for social perception and binds temporally related movements into holistic group percepts, then movement-related responses should be stronger for synchronous movements, where a clearly defined relationship between the movements allows them to be integrated more easily.
Furthermore, if it is indeed movements that are integrated and not just low-level features, this effect should depend on manipulations known to perturb biological movement processing. Therefore, Experiments 2 (N = 19) and 3 (N = 19) explored the degree to which the influence of synchrony depended on seeing feasible movement trajectories (Experiment 2), by comparing fluent and non-fluent sequences (Lange & Lappe, 2007), and on seeing canonical body orientations (Experiment 3), by comparing upright and inverted agents (Troje & Westhoff, 2006).
Finally, by manipulating both image order and body orientation, we could also investigate whether it is the static, spatial (upright or inverted) or the dynamic, temporal (fluent or non-fluent) features of the stimulus that are linked together. Given that apparent motion perception is a hierarchical process in which movement processing builds on body processing (Giese & Poggio, 2003; Lange & Lappe, 2006), group binding can operate either on bodies, on movements, or on both. As body inversion perturbs configural body processing (Lange & Lappe, 2007; Reed et al., 2003), it should interfere primarily with the binding of bodies. In contrast, disfluent movement perturbs global motion processing (Downing et al., 2006; Lange & Lappe, 2007) and should interfere with the binding of movements instead.

Dissociating Image, Body, and Movement Processing
To test the hypothesis that base rate, half cycle, and full cycle responses capture different components of biological motion processing, we submitted the neural responses at each of the three frequencies to separate analyses. Base rate is coupled to image presentation. As a result, base rate responses should primarily capture the processing of those images and should therefore be present in all conditions. The base rate response was indeed strongest in the non-fluent condition but was also clearly visible in the random condition, t(9) = 12.63, p one-tailed < .001, BF10 = 1.

In sum, our results clearly dissociate three levels of biological motion processing: base rate responses captured image processing, full cycle responses captured body sequence processing, and half cycle responses captured movement processing. This was evident both from the pattern across conditions and from the topographies. With regard to the condition pattern, we found that the brain responded most strongly to the half cycle structure of the stimulus when images were presented in sequences producing a coherent movement percept, but to its full cycle structure when they were presented in sequences precluding such a percept (Orgs et al., 2013, 2016). With regard to the topographies, we found that, consistent with neuroimaging research on low-level visual (Van Essen & Maunsell, 1983), body (Downing, 2001), and movement processing (Caspers et al., 2010), base rate responses were strongest over middle occipital areas, full cycle responses over lateral occipitotemporal areas, and half cycle responses additionally activated frontocentral areas. Error bars in the figures are standard errors of the mean (SEMs), corrected for within-subject designs (Morey, 2008).

Binding Individual Movements Into Group Movements
Experiment 1 showed that our paradigm could dissociate static and dynamic components of apparent motion processing, thereby capturing the perceptual binding of successive body postures into a continuous movement percept. However, the key question of this study remains to be addressed: how does the brain distinguish multiple people moving independently from multiple people moving as a group? Experiments 2 and 3 test the hypothesis that the brain does this by reconstructing not just individual movements from static postures, but by also taking into account the temporal relationships between those movements. (In the corresponding figure, the white dots indicate the electrodes included in the analyses; error bars are SEMs corrected for within-subject designs; Morey, 2008.)

Discussion
This study investigated, in three experiments, the hypothesis that the brain binds temporally related movements into holistic group representations. To this end, we developed a novel approach that combines apparent biological motion with EEG frequency tagging to measure three different components of biological motion processing, namely early visual image processing (base rate), body sequence processing (full cycle), and movement processing (half cycle), and investigated how these components were modulated by the presence or absence of movement synchrony. Experiment 1 sought to validate our paradigm with individual movements and confirmed that it successfully dissociated between the static and the dynamic components of biological motion processing. Half cycle responses in both Experiments 2 and 3 were stronger for synchronous than for asynchronous movements. In contrast, full cycle and base rate responses were either not sensitive to synchrony (Experiment 2) or stronger for asynchronous movements (Experiment 3). Crucially, the effect of synchrony on half cycle responses was stronger for fluent than for disfluent sequences. Given that the internal structure of both sequences was fully matched, this precludes an explanation in terms of low-level processes. Instead, it points towards an interpretation in terms of the temporal relationship between the postures and their apparent movement trajectories. Specifically, our results show that the brain integrates individual movements into group representations by applying the principles of perceptual organization to social perception (Wagemans et al., 2012).
Furthermore, by manipulating body configuration and movement fluency, we could also test specifically which features of biological motion were bound together. That is, body inversion should interfere with the grouping of bodies but not movements (Reed et al., 2003), whereas temporal scrambling should interfere with the grouping of movements but not bodies (Downing et al., 2006). As the modulation of half cycle responses by synchrony was found to depend on movement fluency but not on body inversion, this suggests that it is the dynamic rather than the static features of biological motion that are bound into groups. Specifically, our results suggest that synchronous movements are integrated into a single group movement during the temporal integration of body images into movements (Giese & Poggio, 2003; Lange & Lappe, 2006), so that rather than having to analyze the movements of all individual actors, a more efficient movement analysis can take place at the group level. Importantly, however, the half cycle response as such did show an inversion effect. This shows that half cycle responses captured a body-specific process, consistent with the idea that they are the output of a sequential process in which temporal integration of postures into movements occurs only after configural processing of static bodies (Giese & Poggio, 2003; Lange & Lappe, 2006).
The full cycle response behaved opposite to the half cycle response: whenever the latter was increased, such as when seeing synchronous, fluent, or upright movement, the former was decreased (Supplementary Analysis). This suggests that perturbing movement processing in apparent biological motion causes frame-by-frame processing to take over. The finding that full cycle responses were, if anything, stronger rather than weaker for inverted bodies further indicates that this frame-by-frame processing does not reflect configural body processing (Giese & Poggio, 2003; Lange & Lappe, 2006; Reed et al., 2003), but rather an earlier, more local analysis of body postures. This interpretation is consistent with the lateral-occipital topography of the full cycle response, pointing towards the extrastriate body area (EBA) as the underlying source. Indeed, both brain imaging (Brandman & Yovel, 2010) and brain stimulation (Urgesi et al., 2007) studies have shown that EBA is not sensitive to body inversion. Instead, global body processing has been linked to superior parietal and premotor areas (Urgesi et al., 2007). Similarly, in the current study, the half cycle response, which was perturbed for inverted bodies, also showed a frontocentral component in addition to a posterior component.
Thus, our results show that full cycle responses capture local processing of body posture sequences, whereas half cycle responses capture global processing of body postures, together with the integration of these postures into movements (Giese & Poggio, 2003; Lange & Lappe, 2006). Importantly, this indicates that, even though both responses are harmonically related, they captured distinct processes. This is consistent with evidence that harmonically related components of a musical rhythm, such as beat and meter (Nozaradan et al., 2011), beat and rhythmic tapping (Nozaradan et al., 2015), or different meters (Chemin et al., 2014), can likewise produce dissociable responses. Adding to this, the finding that results were comparable at two different base rates (10 Hz and 7.5 Hz) and for both foveal (Experiment 1) and peripheral (Experiments 2 and 3) stimulus presentation further excludes the possibility that they could be explained purely based on processing basic visual features at a specific presentation frequency.

Finally, the base rate response followed the full cycle response, with either no synchrony effect (Experiment 2) or stronger responses for asynchronous movements (Experiment 3). Interestingly, this finding goes against the results of a recent study that used EEG frequency tagging to measure brain responses to periodic contrast changes of four point-light dancers moving in or out of sync (Alp et al., 2017). The results showed that posterior, occipital brain areas responded more strongly to contrast changes when the dancers moved synchronously, thereby suggesting that low-level visual processing is enhanced rather than reduced when observing synchronous movement. However, this study used two frequencies, with the contrast of one half of the dancers changing at F1 and the contrast of the other half changing at F2. This generates not only fundamental responses at the stimulation frequencies but also intermodulation responses at linear combinations of those frequencies (e.g., F1 + F2), reflecting non-linear neural interactions between the two input streams (Norcia et al., 2015; Zemon & Ratcliff, 1984). In Alp et al. (2017), synchrony modulated the intermodulation but not the stimulation frequencies, indicating that synchrony influenced not early visual processing per se, but rather the integration of early visual features across stimuli. These differences in experimental design can therefore explain why we found reduced rather than increased early visual processing when observing synchronous movement.
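To make the intermodulation logic concrete: in such a two-frequency design, candidate intermodulation responses fall at positive linear combinations n1·F1 + n2·F2 with both coefficients non-zero. A small sketch, using illustrative frequencies rather than the actual values of Alp et al. (2017):

```python
def intermodulation_freqs(f1: float, f2: float, max_order: int = 3):
    """Positive linear combinations n1*f1 + n2*f2 (n1, n2 != 0,
    |n1| + |n2| <= max_order): candidate intermodulation frequencies."""
    freqs = set()
    for n1 in range(-max_order, max_order + 1):
        for n2 in range(-max_order, max_order + 1):
            if n1 != 0 and n2 != 0 and abs(n1) + abs(n2) <= max_order:
                f = n1 * f1 + n2 * f2
                if f > 0:
                    freqs.add(round(f, 6))
    return sorted(freqs)

# Illustrative example: F1 = 1.2 Hz, F2 = 1.5 Hz.
# The result includes F1 + F2 = 2.7 Hz and F2 - F1 = 0.3 Hz.
print(intermodulation_freqs(1.2, 1.5))
```

Note that the fundamental frequencies F1 and F2 themselves are excluded; only the combination terms index non-linear interactions between the two input streams.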
By investigating how synchrony drives action perception, the current study has important implications for understanding group alignment (Raafat et al., 2009; Shamay-Tsoory et al., 2019) and its social consequences (Rennung & Göritz, 2016). In particular, our results indicate that multiple people acting together as a group form a strong visual trigger to which the brain entrains more easily than to multiple people acting individually. Furthermore, while speculative, the frontocentral activation cluster observed for half cycle responses could indicate that this neural entrainment may have not only a visual but also a sensorimotor component. Regardless of motor involvement, however, increased neural alignment provides a neurologically feasible explanation for why both human (Dyer et al., 2009; Raafat et al., 2009) and non-human animals (Couzin, 2018; Sumpter, 2006) tend to move in line with the group.
At the same time, increased neural entrainment may also help explain why synchrony is aesthetically pleasing (Eskenazi et al., 2015; Vicary et al., 2017) and a signal of group cohesion (Lakens & Stel, 2011; Marques-Quinteiro et al., 2019; Wilson & Gos, 2019), as stimuli that are processed more fluently are known to produce a hedonic response (Reber et al., 2004). In line with this view, subjective judgements acquired after Experiments 2-3 showed that participants liked the synchronous videos more than the asynchronous videos, and Spearman correlations across these two experiments showed that the difference in liking between both conditions correlated positively with the difference in perceived synchrony, ρ = 0.44, p = .006, but negatively with the difference in perceived complexity, ρ = -0.42, p = .008 (Figure 5). Finally, our study adds to a growing body of research investigating ensemble processes in social perception. This research found that two individuals are represented as a single unit when they are either facing (Papeo et al., 2017; Vestner et al., 2019) or interacting (Ding et al., 2017; Liu et al., 2018), and that this changes both how these stimuli are perceived (Liu et al., 2018; Papeo et al., 2017; Vestner et al., 2019) and remembered (Ding et al., 2017; Vestner et al., 2019). By showing that, despite identical input, the brain entrains more strongly to people moving together than to people moving independently, the current results reveal a potential neural mechanism of how ensemble processes shape social perception and evaluation.
To conclude, the current research makes two important contributions. First, it introduces a new approach to study biological motion perception that captures image processing (base rate), local body processing (full cycle), and the temporal integration of configural body snapshots into movements (half cycle) at different frequencies of the EEG spectrum. Second, it extends existing models of biological motion processing by showing that the brain not only binds bodies into movements (Giese & Poggio, 2003; Lange & Lappe, 2006) but also binds movements into groups. These results have important implications for understanding group alignment and its social consequences and provide insight into the neural mechanisms through which observers bind interacting individuals into ensembles shaping perception and memory.

Experiment 1
Participants. Ten healthy volunteers with normal or corrected-to-normal vision participated in the experiment (9 female; Mage = 22.33, SDage = 2.12, age range = 19-26). While this sample size allows us to detect only effects of dz ≥ 0.72 (Lakens et al., 2018), such effects are to be expected considering the high signal-to-noise ratio of EEG frequency tagging (Norcia et al., 2015; Regan, 1966). All participants signed an informed consent form before the experiment and were paid 10 Euros in exchange for their participation. Experiment 1 was conducted at the Université catholique de Louvain and was approved by the local ethics committee.

Task, Stimuli, and Procedure. The experiment was programmed in MATLAB 2009 (Psychtoolbox). Participants saw repeating apparent biological motion sequences consisting of 12 grey-scale body postures presented on a grey background (Figure 1). There were two experimental conditions (fluent and non-fluent) and one control condition (random).
In random sequences, images were presented at random. In contrast, fluent and non-fluent sequences both had a fixed, symmetrical structure in which the second half of the sequence mirrored the first half of the sequence presented in reversed order. In fluent sequences, images were arranged to form a rhythmical dance movement representing a dancer moving from left to right and back from right to left. In the non-fluent condition, these same 12 images were rearranged into a sequence with maximum visual displacement between successive body postures. As a result, even though both sequences were symmetrical, this symmetrical structure was salient only in the fluent sequences (Orgs et al., 2013, 2016). The three conditions were presented block-wise in randomized order, with 5 blocks per condition. Each block consisted of a 120 s video with a 10 s fade-in and a 10 s fade-out. The fluent and non-fluent videos were created by repeating the corresponding 12-image sequence 100 times, and the random videos by presenting a random 1200-image sequence (Videos S1-S3). To maintain attention, participants were instructed to fixate on a grey cross in the center of the screen and to press the space bar each time its color changed briefly (200 ms) to red (Rossion et al., 2012).
Importantly, the symmetrical structure of the fluent and non-fluent sequences produces three hierarchical levels of frequency tagging (Figure 1). The first level is driven by the fact that presenting images at 10 Hz produces a 10 Hz base rate response in the EEG signal, primarily representing the processing of image onset. In contrast, the second and third levels are driven not by image presentation but by the stimulus structure. The second level reflects the fact that every 6th image (half cycle, at 1.67 Hz) signals a turning point in the sequence, and the third level that every 12th image (full cycle, at 0.83 Hz) signals the repetition of the complete sequence. Previous research on apparent biological motion has shown that fluent sequences are primarily seen as movements, whereas non-fluent sequences are primarily seen as a series of body postures (Orgs et al., 2013, 2016; Shiffrar & Freyd, 1990). In our task, movements repeat at half cycle, whereas body posture sequences repeat at full cycle. Therefore, if our task measures the integration of body postures into movements (Giese & Poggio, 2003), fluent sequences should drive brain activity at half cycle frequencies and non-fluent sequences at full cycle frequencies. Moreover, in the random condition, where neither posture sequences nor movements repeat, both half- and full cycle responses should be absent.
EEG recording and preprocessing. EEG was recorded from 128 Ag/AgCl active electrodes using a Biosemi EEG system and a sampling rate of 512 Hz. Vertical and horizontal eye movements were measured using four additional electrodes placed on the outer canthus of each eye and in the inferior and superior areas of the right orbit. During EEG recording, all electrodes were referenced to AFz, and electrode impedances were kept below 10 kΩ. All EEG data were processed offline using Letswave 6 (https://www.letswave.org/).
Raw data were band-pass filtered offline using a fourth-order Butterworth filter with cut-off values of 0.1–100 Hz and segmented according to the experimental conditions (-2 to 122 s).
Next, eye movement artefacts were removed by applying ICA on the merged segmented data.
Specifically, we analyzed the first 10 components and removed one component for blinks and one or two components related to eye movements. After ICA, faulty or excessively noisy electrodes (< 1% on average) were interpolated using data from the three closest neighboring electrodes. For each frequency bin of the resulting amplitude spectrum, we then computed z-scores, using the 20 surrounding bins (excluding the immediately adjacent bins) as a baseline, and selected the first 10 harmonics with z > 2.32 (i.e., p one-tailed < 0.01; Retter & Rossion, 2016). Importantly, the three frequencies are harmonically related.
Therefore, to minimize overlap, the full cycle response was calculated using only those harmonics that did not overlap with the half cycle harmonics (i.e., the odd harmonics), and the half cycle response using only those harmonics that did not overlap with the base rate harmonics. Accordingly, the full cycle response was calculated as the sum of the baseline-subtracted amplitudes at its non-overlapping harmonics, starting at 0.83 Hz. Importantly, because baseline-subtracted amplitudes were used, the summed response in the absence of a signal is expected to be 0 (Retter & Rossion, 2016).
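The noise z-scoring and the non-overlapping harmonic selection can be sketched as follows. The bin counts follow the text, but the function names and the exact windowing are our assumptions, not the paper's code:

```python
import statistics

def bin_zscore(amps, idx, n_side=10, skip=1):
    """Z-score of the amplitude at bin `idx` against a local noise estimate:
    n_side bins on each side, excluding the `skip` immediately adjacent bins
    (cf. Retter & Rossion, 2016)."""
    noise = ([amps[i] for i in range(idx - skip - n_side, idx - skip)]
             + [amps[i] for i in range(idx + skip + 1, idx + skip + 1 + n_side)])
    return (amps[idx] - statistics.mean(noise)) / statistics.stdev(noise)

def non_overlapping_harmonics(freq_hz, other_hz, n=10):
    """First n harmonics of freq_hz that do not coincide with a harmonic of
    other_hz (e.g., the odd full-cycle harmonics, which never coincide with
    half-cycle harmonics)."""
    others = {round(other_hz * k, 6) for k in range(1, 10 * n)}
    return [freq_hz * k for k in range(1, n + 1)
            if round(freq_hz * k, 6) not in others]

# Experiment 1: full cycle harmonics (of 10/12 Hz) that are not also
# half cycle harmonics (of 10/6 Hz) -> the odd harmonics.
full_kept = non_overlapping_harmonics(10 / 12, 10 / 6)
print([round(f, 2) for f in full_kept])  # [0.83, 2.5, 4.17, 5.83, 7.5]
```

The same selection applied to the half cycle frequency removes its overlap with the base rate harmonics.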
To prevent selection bias, the electrodes entered into the analysis were chosen by averaging the topographies of each response across participants and conditions (Luck & Gaspelin, 2017). This revealed four clusters: a middle posterior cluster with a maximum at Oz, two lateral posterior clusters with maxima at PO7 and PO8, and a frontocentral cluster with a maximum at FCz (Figure 2). The response in each cluster was quantified by taking 5 electrodes centered around the maximum electrode.
The resulting data for each response were analyzed with a condition (fluent, non-fluent, or random) × region (left posterior, middle posterior, right posterior, or middle central) repeated-measures ANOVA. ANOVA degrees of freedom were corrected for violations of sphericity using the Greenhouse-Geisser correction whenever Mauchly's sphericity test was significant (p < .05). Unless otherwise specified, all t tests are two-tailed. To further quantify the evidence, all t tests are accompanied by Bayes factors (BFs), calculated with a non-informative Jeffreys prior on the variance and a Cauchy prior centered on 0 with a scale of 1 (Rouder et al., 2012). We chose a scale of 1 ("wide prior") because we expected large effect sizes.
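For readers who want to reproduce this kind of Bayes factor, the JZS BF10 for a one-sample t test with a Cauchy(0, r) prior on effect size (Rouder et al., 2009) can be approximated numerically. This is our own simple midpoint-rule sketch, not the routine used in the analyses:

```python
import math

def jzs_bf10(t: float, n: int, r: float = 1.0, steps: int = 20000) -> float:
    """Approximate JZS Bayes factor BF10 for a one-sample t test with a
    Cauchy(0, r) prior on effect size (Rouder et al., 2009). Numerical
    midpoint-rule sketch, not a validated statistics routine."""
    nu = n - 1

    def integrand(g: float) -> float:
        a = 1.0 + n * g * r * r
        return (a ** -0.5
                * (1.0 + t * t / (a * nu)) ** (-(nu + 1) / 2.0)
                * (2.0 * math.pi) ** -0.5    # inverse-gamma(1/2, 1/2) prior on g
                * g ** -1.5
                * math.exp(-1.0 / (2.0 * g)))

    # Integrate g over (0, inf) via the substitution g = x / (1 - x).
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps
        g = x / (1.0 - x)
        total += integrand(g) / (1.0 - x) ** 2
    numerator = total / steps
    denominator = (1.0 + t * t / nu) ** (-(nu + 1) / 2.0)
    return numerator / denominator
```

With r = 1 this corresponds to the "wide prior" described above; r = sqrt(2)/2 would give the more common "medium" default.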

Experiments 2-3
Participants. In each experiment, sample size was based on an a priori power analysis indicating that 19 participants were necessary to obtain 90% power to detect effect sizes one-third the size of the half cycle fluency effect in Experiment 1.
In both experiments, 1 participant had to be excluded because large artefacts across the scalp and throughout the entire experiment made the data uninterpretable. Therefore, the final sample in both experiments comprised 19 participants. All participants signed an informed consent prior to the experiment and were paid £20 in exchange for their participation. Both experiments were conducted at Goldsmiths College, University of London and were approved by the local ethics committee.

Task, Stimuli, and Procedure.
Both experiments were programmed in PsychoPy (Peirce et al., 2019). The overall procedure was similar to Experiment 1, except that participants now saw not one but four agents, organized in a square grid around the fixation cross, performing fluent or non-fluent movements (Experiment 2) in an upright or inverted position (Experiment 3), either in or out of synchrony (Figure 1). Movement fluency was manipulated in the same way as in Experiment 1, and synchrony was manipulated by having the agents start from the same (i.e., synchrony) or from different (i.e., asynchrony) starting positions in the sequence (Wilson & Gos, 2019). The 4 starting positions in the asynchronous condition were chosen to maximize perceived asynchrony and were the same for all participants. However, which agent started at which of these 4 positions was counterbalanced across participants.
Experiments 2-3 also used a different presentation rate than Experiment 1. That is, instead of presenting images at a rate of 10 Hz, we used a presentation rate of 7.5 Hz. This was done for two reasons: first, because we wanted to test the degree to which the results of Experiment 1 could be generalized to different frequencies, and second, because a slower presentation rate made the asynchronous condition appear less synchronous. In line with Experiment 1, all conditions were presented block-wise in randomized order, with 5 blocks per condition. However, in contrast to Experiment 1, blocks consisted not of a 120 s video but of a 128 s video with an 8 s fade-in and an 8 s fade-out period. Videos were presented on a white background and were created by repeating the relevant 12-image sequence 80 times (Videos S5-S6). To maintain attention and minimize eye movements, participants were asked to focus on a black fixation cross in the center of the screen and to press the space bar each time its color changed briefly (267 ms) to red (Rossion et al., 2012). Before the experiment proper, participants completed 1 practice block where the body postures of all four agents were presented at random, similar to the random condition of Experiment 1. Finally, after the experiment, participants did a brief rating task where they saw a shortened (25 s) video of each condition and were asked to rate the synchrony and complexity of the video, as well as how much they liked it, on a scale ranging from 0 to 100.
EEG recording and preprocessing. EEG was recorded from 64 Ag/AgCl active electrodes using a Biosemi EEG system and a sampling rate of 512 Hz. Vertical and horizontal eye movements were measured using four additional electrodes placed on the outer canthus of each eye and in the inferior and superior areas of the left orbit. During EEG recording, all electrodes were referenced to two electrodes placed on the left and right ear lobes, and electrode impedances were kept below 10 kΩ. All EEG data were processed offline using Letswave 6 (https://www.letswave.org/). Raw data were band-pass filtered offline using a fourth-order Butterworth filter with cut-off values of 0.1–100 Hz and segmented according to the experimental conditions (-2 to 130 s). Next, eye movement artefacts were removed by applying ICA on the merged segmented data, using the same approach as in Experiment 1.
After ICA, faulty or excessively noisy electrodes (< 1% on average) were interpolated using data from the three closest neighboring electrodes. In addition, in Experiment 2, a complete block was discarded for a single participant because a large artefact across the scalp disproportionally biased the signal. The signal was then re-referenced to the average of all electrodes, before cropping the segments into 112 s epochs (8 to 120 s). At a 7.5 Hz base rate, this epoch length ensures that all relevant frequencies fall exactly on an FFT frequency bin (i.e., an integer number of cycles fits into each epoch). Finally, the trials within each condition were averaged, and a Fast Fourier Transform (FFT) was applied to transform the data of each electrode into normalized (divided by N/2) amplitudes (µV) in the frequency domain (from 0 to 256 Hz).
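The rationale for the 112 s epoch length can be checked numerically; in the following sketch, the function name and tolerance are our own:

```python
def on_fft_bin(freq_hz: float, epoch_s: float, tol: float = 1e-9) -> bool:
    """True if freq_hz falls exactly on an FFT frequency bin, i.e., if an
    integer number of cycles fits into the epoch (bin width = 1 / epoch_s)."""
    cycles = freq_hz * epoch_s
    return abs(cycles - round(cycles)) < tol

base = 7.5
epoch = 112.0
# Base rate, half cycle (7.5/6 = 1.25 Hz), and full cycle (7.5/12 = 0.625 Hz)
# all coincide with a frequency bin of a 112 s epoch.
print(all(on_fft_bin(f, epoch) for f in (base, base / 6, base / 12)))  # True
```

A 100 s epoch, by contrast, would place the 0.625 Hz full cycle response between bins, smearing the tagged response across neighboring frequencies.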

EEG analysis.
The data were analyzed in the same way as in Experiment 1. Note that this means that the base rate response was now calculated as the sum of the first 10 rather than the first 8 harmonics, because all base rate harmonics were now < 100 Hz. We analyzed activity in the same four electrode clusters as in Experiment 1. However, since we had only 64 instead of 128 electrodes, we used not 5 but 3 electrodes centered around Oz, PO7, PO8, and FCz.