bioRxiv
The neural dynamics underlying prioritisation of task-relevant information

Tijl Grootswagers (a,b), Amanda K. Robinson (b), Sophia M. Shatek (b), Thomas A. Carlson (b)
doi: https://doi.org/10.1101/2020.06.25.172643
a The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
b School of Psychology, University of Sydney, NSW, Australia

Correspondence: t.grootswagers@westernsydney.edu.au

Abstract

The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but how this is implemented in the brain remains unclear. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural coding of each individual stimulus. Whole-brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention enhanced stimulus-related information contained in the neural responses, but this enhancement was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable detail about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.

Introduction

To efficiently perform a task, our brains continuously prioritise and select relevant information from a constant stream of sensory input. All sensory input is automatically and unconsciously processed, but the depth of processing varies depending on the task and input characteristics (Grootswagers et al., 2019a; King et al., 2016; Mohsenzadeh et al., 2018; Robinson et al., 2019; Rossion et al., 2015; Rousselet et al., 2002). At what stage in the response is task-relevant information prioritised? Neurophysiological methods such as electroencephalography (EEG) and magnetoencephalography (MEG) have offered insight into the time-scales at which selective attention operates in the human brain. For example, a stimulus that is presented in an attended location evokes a stronger neural response around 100ms (e.g., Mangun, 1995; Mangun et al., 1993). Similarly, when a certain feature of a stimulus is attended, the neural coding of this feature is enhanced (Martinez-Trujillo and Treue, 2004; Maunsell and Treue, 2006), with enhancements for basic features (e.g., colour) starting as early as 100ms (e.g., Zhang and Luck, 2009). Feature-selective attention, however, has been found to influence later stages of processing, after 300ms (Goddard et al., 2019). In a sequence of stimuli, temporal selection of task-relevant target stimuli is reported around 270ms (Kranczioch et al., 2005, 2003; Marti and Dehaene, 2017; Sergent et al., 2005; Tang et al., 2019). A question that has received considerably less focus is how these mechanisms interact in situations where multiple stimuli are presented in the same location and at the same time. Determining the stages of processing affected by attention in these situations is important for understanding selective attention as a whole, and for constructing an overarching theory of attention.

Studying neural responses to simultaneously presented stimuli is difficult, as the stimulus-specific signals are overlapping. One solution is to display stimuli at different presentation rates and analyse neural responses in the matching frequency bands (e.g., Ding et al., 2006; Müller et al., 2006), but this approach does not allow studying the underlying temporal dynamics. Another approach is to use multivariate decoding methods, which have recently provided new opportunities to study attentional effects on information at the individual stimulus level (e.g., Alilović et al., 2019; Goddard et al., 2019; Marti and Dehaene, 2017; Smout et al., 2019). These methods also make it possible to decode the overlapping neural signals evoked by stimuli presented close in time (e.g., Grootswagers et al., 2019a; Marti and Dehaene, 2017; Robinson et al., 2019), even when these stimuli are not task-relevant (Grootswagers et al., 2019b; Marti and Dehaene, 2017; Robinson et al., 2019). Multivariate decoding methods can therefore be used to disentangle information from simultaneously presented stimuli and investigate the temporal dynamics of attentional mechanisms operating on the stimuli.

We conducted two experiments to investigate the effect of attention on the representations of simultaneously presented objects and letters. Participants were shown images of objects overlaid with alphabet letters, or vice versa, in rapid succession and performed a cognitively demanding 2-back task on either the object or the letters, which required attending to one of the two simultaneously presented stimuli. We then performed a multivariate decoding analysis on all non-target object and letter stimuli in the presentation streams and examined the differences between the two task conditions. In both experiments, we found that we could decode all stimuli regardless of whether they were attended, but that attention enhanced the coding of the relevant stimulus (object versus letter). In Experiment 1, with small letters overlaid on larger objects, attentional enhancement emerged around 220ms post-stimulus onset for objects, but for letters the difference started earlier, at 100ms post-stimulus onset. In a second experiment, we exchanged the position of the stimuli on the display (i.e., letters overlaid with objects) and found that the timing difference reversed accordingly. Our results show how early neural responses to simultaneously presented stimuli are modulated by certain aspects of the stimulus (e.g., size of attended stimulus) as well as our current task and attentional focus.

Methods

We performed two experiments that investigated the effect of attention on the representations of non-target stimuli during rapid serial visual presentation streams. Unless stated otherwise, the description of the methods below applies to both experiments. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.

Stimuli & Design

Stimuli consisted of 16 visual objects and 16 uppercase letters (ABCDEFGJKLQRTUVY). The visual objects were coloured segmented objects obtained from www.pngimg.com spanning four categories (birds, fish, boats, and planes) with 4 images in each category. The categories could also be assigned to a two-way superordinate organisation (i.e., animals versus vehicles). In Experiment 1, we superimposed one of 16 uppercase letters (approx. 0.8 degrees visual angle) in white font on a black circular background (Figure 1B&C) on top of the visual object stimuli (approx. 3.3 degrees visual angle). In Experiment 2, we superimposed the visual object stimuli (approx. 1.7 degrees visual angle) on one of the 16 uppercase letters (approx. 3.3 degrees visual angle) in white font on a black circular background (Figure 1D&E). Stimuli were presented in sequences of 36 (two repeats of each stimulus plus two two-back targets) for 200ms each, followed by a blank screen for 200ms; that is, a 2.5Hz presentation rate with a 50% duty cycle. In alternating sequences of stimuli, participants were instructed to attend the objects or the letters and perform a cognitively demanding two-back task. Participants pressed a button whenever the stimulus they were attending to (object or letter) was the same as the stimulus that appeared two images beforehand.

Figure 1. Stimuli and design.

A) Stimuli were 16 segmented objects spanning four categories (birds, fish, boats, planes) and two superordinate categories (animals and vehicles). Stimuli were presented in sequences at 2.5Hz (200ms on, 200ms off) and in each sequence, participants performed a two-back task on either the objects or on the letters. B) In the object task, participants responded with a button press when an object image was the same as the second-to-last image (two-back), while ignoring the letters. C) In the letter task, participants ignored the object images and responded on a two-back letter repeat. D, E) In the second experiment, the positions of the letters and objects were swapped while keeping all other details the same.

We constructed 48 sequences of 32 simultaneous object and letter combinations. A sequence of stimuli was constructed by concatenating two sets of random permutations of 16 items (representing the stimuli), with the constraint that there were no repeats amongst the middle 8 items. We selected two random positions for target placement, one in the first half and one in the second half of each sequence, and inserted a target before and after the chosen positions, thus creating two-back repeats. The targets were never the same as the nearest three stimuli. Each stimulus was a target equally often. The order of stimuli in each sequence was mirror-copied, so that the order of objects and letters had matching properties while having targets in different positions. The 48 sequences were then presented twice in the experiment in random order (96 sequences in total), once for the object task, and once for the letter task. The task condition of the first sequence was counterbalanced across participants, and the conditions alternated every sequence.
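The sequence-construction logic above can be sketched as follows. This is an illustrative re-implementation in Python, not the authors' code; the function name, the exact insertion-position ranges, and the constraint handling are our own assumptions.

```python
import random

def make_sequence(n_stim=16, rng=None):
    """Build one 36-item sequence: two concatenated permutations of the
    16 stimulus indices, plus two inserted two-back target repeats.
    Illustrative sketch; the published constraints may differ in detail."""
    rng = rng or random.Random()
    while True:
        base = rng.sample(range(n_stim), n_stim) + rng.sample(range(n_stim), n_stim)
        if len(set(base[12:20])) == 8:  # no repeats among the middle 8 items
            break
    # One target position in each half; insert at the later position first
    # so the earlier index stays valid.
    for pos in sorted([rng.randrange(3, 13), rng.randrange(19, 29)], reverse=True):
        nearby = set(base[max(0, pos - 3):pos + 3])  # target differs from nearest stimuli
        target = rng.choice([s for s in range(n_stim) if s not in nearby])
        base.insert(pos, target)       # first appearance of the target
        base.insert(pos + 2, target)   # same item two positions later: a two-back repeat
    return base

seq = make_sequence(rng=random.Random(1))
```

Each call returns a 36-item list (32 non-target presentations plus two targets shown twice each), with one two-back repeat in each half of the sequence.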

EEG recordings and preprocessing

Participants in Experiment 1 were 20 adults (9 female, 11 male; mean age 24.45 years; age range 19-41 years; all right-handed). Participants in Experiment 2 were 20 adults (17 female, 3 male; mean age 22.45 years; age range 19-36 years; 1 left-handed). All participants reported normal or corrected-to-normal vision and were recruited from the University of Sydney in return for payment or course credit. The study was approved by the University of Sydney ethics committee and informed consent was obtained from all participants. During EEG setup, participants practiced on example sequences of the two-back task. Continuous EEG data were recorded from 64 electrodes arranged according to the international standard 10–10 system for electrode placement (Jasper, 1958; Oostenveld and Praamstra, 2001) using a BrainVision ActiChamp system, digitized at a 1000-Hz sample rate. Scalp electrodes were referenced online to Cz. We used the same preprocessing pipeline as earlier work that applied MVPA to rapid serial visual processing paradigms (Grootswagers et al., 2019a, 2019b; Robinson et al., 2019). Preprocessing was performed offline using EEGlab (Delorme and Makeig, 2004). Data were filtered using a Hamming windowed FIR filter with 0.1Hz highpass and 100Hz lowpass filters, re-referenced to an average reference, and were downsampled to 250Hz. No further preprocessing steps were applied, and the channel voltages at each time point were used for the remainder of the analysis. Epochs were created for each stimulus presentation ranging from [-100 to 1000ms] relative to stimulus onset. Target epochs (task-relevant two-back events) were excluded from the analysis.
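A rough numpy/scipy analogue of this preprocessing pipeline is sketched below. The authors used EEGlab; here the filter order is kept short for illustration (a true 0.1 Hz FIR high-pass needs far more taps), and the function names and default arguments are our own assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess(eeg, fs=1000, fs_out=250, band=(0.1, 100.0), numtaps=801):
    """Band-pass filter (Hamming-window FIR), average-reference, and
    downsample continuous EEG given as a channels x samples array.
    Illustrative sketch; numtaps is far too short for a real 0.1 Hz high-pass."""
    b = firwin(numtaps, band, pass_zero=False, fs=fs, window="hamming")
    eeg = filtfilt(b, 1.0, eeg, axis=-1)          # zero-phase filtering
    eeg = eeg - eeg.mean(axis=0, keepdims=True)   # re-reference to channel average
    step = fs // fs_out
    return eeg[:, ::step]                          # 1000 Hz -> 250 Hz

def epoch(eeg, onsets, fs=250, tmin=-0.1, tmax=1.0):
    """Cut epochs from [tmin, tmax) seconds around each onset sample."""
    i0, i1 = int(round(tmin * fs)), int(round(tmax * fs))
    return np.stack([eeg[:, o + i0:o + i1] for o in onsets])
```

With 64 channels at 250 Hz, each epoch spans 275 samples (-100 to 1000 ms), giving an (n_epochs, 64, 275) array for the decoding analyses.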

Decoding analysis

To assess the representations of attended and unattended stimuli in the neural signal, we applied an MVPA decoding pipeline (Grootswagers et al., 2017) to the EEG channel voltages. The decoding analyses were implemented in CoSMoMVPA (Oosterhof et al., 2016). A regularised linear discriminant analysis classifier was used in combination with an exemplar-by-sequence cross-validation approach. Decoding was performed within subject, and the subject-averaged results were analysed at the group level. This pipeline was applied to each stimulus in the sequence to investigate object representations in fast sequences under different task requirements. For all sequences, we decoded the 16 different object images, and the 16 different letters. We averaged over all pairwise decoding accuracies (i.e., bird 1 vs fish 1, bird 1 vs boat 4, bird 1 vs plane 1, etc.), such that chance-level was 50%. The analysis was performed separately for sequences from the two conditions (object task and letter task), resulting in a total of four time-varying decoding time series per participant. For these analyses, we used a leave-one-sequence-out cross-validation scheme, where all epochs from one sequence were used as the test set. We report the mean cross-validated decoding accuracies.
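A minimal numpy sketch of this pairwise decoding scheme follows. The authors used CoSMoMVPA's regularised LDA in MATLAB; the fixed shrinkage level `lam`, the function names, and the array layout here are simplified assumptions.

```python
import numpy as np
from itertools import combinations

def lda_fit_predict(Xtr, ytr, Xte, lam=0.1):
    """Two-class LDA with the pooled covariance shrunk towards a scaled
    identity (lam is an assumed fixed shrinkage level, not the authors')."""
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    Xc = np.vstack([Xtr[ytr == 0] - m0, Xtr[ytr == 1] - m1])
    S = Xc.T @ Xc / len(Xc)
    S = (1 - lam) * S + lam * (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
    w = np.linalg.solve(S, m1 - m0)                 # discriminant direction
    return (Xte @ w > w @ (m0 + m1) / 2).astype(int)

def pairwise_decoding(X, labels, seq_ids):
    """Mean pairwise decoding accuracy at one time point, using
    leave-one-sequence-out cross-validation. X: epochs x channels."""
    accs = []
    for a, b in combinations(np.unique(labels), 2):
        keep = np.isin(labels, [a, b])
        Xp, yp, sp = X[keep], (labels[keep] == b).astype(int), seq_ids[keep]
        folds = [(lda_fit_predict(Xp[sp != s], yp[sp != s], Xp[sp == s]) == yp[sp == s]).mean()
                 for s in np.unique(sp)]
        accs.append(np.mean(folds))
    return float(np.mean(accs))  # averaging all pairs keeps chance at 0.5
```

Running this at every time sample of the epochs, separately for the object-task and letter-task sequences, yields the time-varying decoding accuracies reported in the Results.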

To determine the effect of attention on higher-level image processing, we also decoded the category (bird, fish, boat, plane) and animacy (animal versus vehicle) of the visual objects. For these categorical contrasts, we used an image-by-sequence cross-validation scheme so that identical images were not part of both training and test sets (Carlson et al., 2013; Grootswagers et al., 2019a, 2017). This was implemented by holding out one image from each category in one sequence as test data and training the classifier on the remaining images from the remaining sequences. This was repeated for all possible held-out pairs and held-out sequences. The analyses were performed separately for the object and letter conditions.
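The image-by-sequence cross-validation can be sketched as a split generator. This is an illustrative reconstruction; the exact enumeration of held-out image sets in the original analysis may differ.

```python
import numpy as np

def image_by_sequence_splits(image_ids, category_ids, seq_ids):
    """Yield (train, test) boolean masks over epochs. Each test set is one
    held-out image per category, taken from one held-out sequence; training
    uses the remaining images in the remaining sequences, so identical
    images never appear in both sets."""
    cats = {}
    for im in np.unique(image_ids):
        c = category_ids[image_ids == im][0]
        cats.setdefault(c, []).append(im)
    n_per_cat = min(len(v) for v in cats.values())
    for s in np.unique(seq_ids):
        for i in range(n_per_cat):
            held = [imgs[i] for imgs in cats.values()]  # one image per category
            in_held = np.isin(image_ids, held)
            yield (seq_ids != s) & ~in_held, (seq_ids == s) & in_held
```

Training a category (or animacy) classifier on each train mask and testing on the matching test mask guarantees that above-chance performance reflects categorical rather than image-specific information.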

Exploratory channel-searchlight

We performed an exploratory channel-searchlight analysis to further investigate which features (channels) of the EEG signal were driving the classification accuracies. For each EEG channel, a local cluster was constructed by taking the closest four neighbouring channels, and the decoding analyses were performed on the signal of only these channels. The decoding accuracies were stored at the centre channel of the cluster. This resulted in a time-by-channel map of decoding for each of the contrasts, and for each subject.
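The searchlight procedure can be sketched as follows; the authors used CoSMoMVPA's channel neighbourhoods, whereas the nearest-neighbour construction and the `score_fn` callback here are our own simplifications.

```python
import numpy as np

def channel_searchlight(X, positions, score_fn, k=4):
    """Channel searchlight: for each channel, run the decoding analysis on
    that channel plus its k nearest neighbours, and store the accuracy at
    the centre channel. X: epochs x channels x times; positions: channels x 2
    (or 3) electrode coordinates; score_fn maps an (epochs x features) array
    to a decoding accuracy."""
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    out = np.zeros((X.shape[1], X.shape[2]))
    for c in range(X.shape[1]):
        hood = np.argsort(dist[c])[:k + 1]   # centre channel plus 4 neighbours
        for t in range(X.shape[2]):
            out[c, t] = score_fn(X[:, hood, t])
    return out
```

In practice, `score_fn` would be the pairwise decoding routine described above, applied to the five-channel feature set, producing the time-by-channel accuracy maps.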

Statistical inference

We assessed whether stimulus information was present in the EEG signal by comparing classification accuracies to chance-level. To determine evidence for above-chance decoding and evidence for differences in decoding accuracies between conditions we computed Bayes factors (Dienes, 2011; Jeffreys, 1961; Rouder et al., 2009; Wagenmakers, 2007). For the alternative hypothesis of above-chance decoding, a JZS prior was used with default scale factor 0.707 (Jeffreys, 1961; Rouder et al., 2009; Wetzels and Wagenmakers, 2012; Zellner and Siow, 1980). The prior for the null hypothesis was set at chance level. We then calculated the Bayes factor (BF), which is the probability of the data under the alternative hypothesis relative to the null hypothesis. For visualisation, we thresholded BF > 10 as substantial evidence for the alternative hypothesis, and BF < 1/3 as substantial evidence in favour of the null hypothesis (Jeffreys, 1961; Wetzels et al., 2011). In addition, we computed frequentist statistics for decoding against chance, and for testing for non-zero differences in decoding accuracies. At each time point, a Wilcoxon signed-rank test was performed for decoding accuracies against chance (one-tailed), and for the difference between conditions (two-tailed). To correct for multiple comparisons across time points, we computed FDR-adjusted p-values (Benjamini and Hochberg, 1995; Yekutieli and Benjamini, 1999).
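The frequentist part of this procedure can be sketched with scipy; the JZS Bayes factor computation is omitted here, and the Benjamini-Hochberg adjustment is hand-rolled for transparency. Function names are our own.

```python
import numpy as np
from scipy.stats import wilcoxon

def fdr_bh(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    out = np.empty_like(p)
    out[order] = np.clip(adjusted, 0, 1)
    return out

def decoding_stats(acc, chance=0.5):
    """acc: subjects x time points of decoding accuracies. Returns
    FDR-adjusted p-values from one-tailed Wilcoxon signed-rank tests of
    above-chance decoding at each time point."""
    pvals = [wilcoxon(acc[:, t] - chance, alternative="greater").pvalue
             for t in range(acc.shape[1])]
    return fdr_bh(pvals)
```

The two-tailed between-condition tests follow the same pattern, with `alternative="two-sided"` applied to the per-subject accuracy differences.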

Results

We examined the temporal dynamics of visual processing for attended (task-relevant) versus unattended (task-irrelevant) stimuli that were spatially and temporally overlapping. Participants performed a difficult two-back target detection task on objects or letters simultaneously presented at fixation. Behavioural performance (Figure 2) was similar for detection of object (mean 0.51, SE 0.03) and letter (mean 0.54, SE 0.04) targets in Experiment 1 (Figure 2A) and higher for the letter (mean 0.65, SE 0.04) than the object (mean 0.57, SE 0.04) targets in Experiment 2 (Figure 2B). Bayes Factors indicated weak evidence for no difference in performance between task contexts in Experiment 1 (Figure 2A), and evidence for better performance on the letter task in Experiment 2 (Figure 2B).

Figure 2. Behavioural performance was similar between the object and letter tasks.

A) Hit rate for all subjects in Experiment 1 defined as the proportion of correctly identified 2-back events. B) Hit rate for all subjects in Experiment 2. Bars show mean and standard error. Each dot represents the hit rate of one subject in one condition (object or letter task). Overall, Bayes Factors (displayed above the x-axis) indicated evidence for better performance on the letter tasks.

To investigate the temporal dynamics of processing for attended and unattended stimuli, we decoded the object images and letters in every sequence, separately for the object task and letter task sequences. Figure 3 shows that objects and letters were decodable regardless of whether the stimulus was attended or not, but that attention enhanced both object and letter processing. For objects, decodability was higher in the object task (task-relevant) relative to the letter task (task-irrelevant), an effect that emerged after 200ms and remained until around 600ms (Figure 3A). For letter decoding, performance was higher for the letter task than for the object task from 100ms to approximately 600ms (Figure 3C). In Experiment 2, we exchanged the position of the object and letters on the screen, so that the letters were presented larger and overlaid with a small object at fixation. Here, attention similarly enhanced object and letter processing, but attention effects occurred at different times. The attentional enhancement for objects emerged after 180ms and remained until approximately 600ms (Figure 3B), and for letters occurred from 220ms to around 600ms (Figure 3D). To combine the results from both experiments, Figure 4 shows the effect of attention (i.e., the differences between decoding accuracies for attended and unattended) for objects and letters in both experiments. Combining the results from both experiments (summarised in Figure 5) shows that the attention effect started earlier for the smaller item in the display (i.e., letters in Experiment 1 and objects in Experiment 2). This suggests that mechanisms for attentional enhancement are modulated by the relative retinal projection of the stimulus. The exploratory channel searchlight for object decoding (Figure 4A) suggested that the stronger coding in the attended condition was right-lateralised. Letter decoding channel searchlights (Figure 4B) showed a more left-lateralised difference in the attended condition. Together, these analyses suggest that attentional effects were lateralised differently between objects and letters. The results presented in Figures 3 and 4 are summarised in Figure 5.

Figure 3. Different effects of attention on decoding performance for objects and letters.

Plots show decoding performance over time for object decoding (A&B) and letter decoding (C&D). Different lines in each plot show decoding accuracy during different tasks over time relative to stimulus onset, with shaded areas showing standard error across subjects (N = 20). Their time-varying topographies are shown below each plot, averaged across 100ms time bins. Thresholded Bayes factors (BF) and p-values for above-chance decoding or non-zero differences are displayed under each plot. For both objects and letters, decodability was higher when they were task-relevant, but the respective time-courses of these differences varied.

Figure 4. Aggregating the attention effect over the two experiments shows the interaction between task and (relative) stimulus size.

Plots show the difference in decoding performance between task-relevant and task-irrelevant object decoding (A) and letter decoding (B). Each line reflects the mean difference from one of the two experiments relative to stimulus onset, with shaded areas showing standard error across subjects (N = 20). Their time-varying topographies are shown below each plot, averaged across 50ms time bins. Thresholded Bayes factors (BF) and p-values for above-chance decoding or non-zero differences are displayed under each plot. Note that these are the same statistics as for the non-zero differences in Figure 3. For both object and letter stimuli, the onsets of the task-effect (relevant-irrelevant) were earlier when the stimulus was smaller.

Figure 5. Summary of main findings.

The top row (A,B) shows the significant time points for each contrast. The bottom row (C,D) shows the time of the peak (denoted by x) accompanied by the distribution of peak times obtained by resampling subjects with replacement 10,000 times. Left columns (A,C) show results for decoding against chance, and right columns (B,D) show the difference between attended and unattended decoding.

To assess the effect of attention on higher-level processes, we also performed object category decoding (e.g., bird versus fish) and animacy decoding (animals versus vehicles). For both contrasts, decodable information was evident in the neural signal whether objects were attended or unattended, but there was more information when they were attended. Figure 6 shows that animacy and category decoding were higher for the object task compared with the letter task. Animacy (animal versus vehicle) was more decodable during the object task than the letter task from approximately 450-550ms in Experiment 1 (Figure 6A) and around 300ms in Experiment 2 (Figure 6B). In both experiments, object category (e.g., bird versus fish) was more decodable from approximately 200ms (Figure 6C-D). These results show attentional enhancement for the more abstract categorical object information when the stimulus was relevant for the current task, but with differential effects of attention depending on the size of the stimuli.

Figure 6. Effects of attention on higher-level categorical object contrasts were similar to individual object decoding.

Plots show decoding performance over time for object animacy decoding (A, B) and object category decoding (C, D). Different lines in each plot show decoding accuracy during different tasks over time relative to stimulus onset, with shaded areas showing standard error across subjects (N = 20). Their time-varying topographies are shown below each plot, averaged across 100ms time bins. Thresholded Bayes factors (BF) and p-values for above-chance decoding or non-zero differences are displayed under each plot.

Discussion

In this study, we asked how attention modulates the representations of visual stimuli. Participants monitored streams of letters overlaid on objects (Experiment 1) or objects overlaid on letters (Experiment 2) and performed a 2-back target detection task on either the letters or the objects. Importantly, we did not analyse the responses to the 2-back targets, but rather investigated how the spatial task context influenced the representation of all other stimuli in the streams. Remarkably, we could decode all attended and unattended stimuli in both experiments, even though they were spatially and temporally overlapping, but the dynamics of the representations varied according to the task and the size of the stimuli. Overall, we found that attending to objects enhanced the neural representations of objects and that attending to letters enhanced the neural representations of letters. The time course of these attentional effects varied, however, such that the enhancement of task-relevant information emerged after 200ms for large stimuli, but before 200ms for small stimuli (Figure 5). Taken together, these findings show that task context selectively enhances the processing of relevant visual stimuli, and that this enhancement is specific to the features of the stimuli being selected.

All stimuli in this study evoked distinct patterns of neural responses regardless of whether they were relevant to the task at hand. That is, letters and objects were decodable in all conditions. This fits with our previous work showing that task-irrelevant objects can be decoded from rapid streams (Grootswagers et al., 2019a; Robinson et al., 2019), likely reflecting a degree of automaticity in visual processing and confirming that target selection is not a requirement for stimulus processing. The current study extends these findings by showing that two simultaneously presented visual objects are decodable even when one stimulus is much less prioritised than the other due to task demands and stimulus size. Strikingly, the duration of above chance decoding was much longer than the stimulus presentation time, and for objects, this was the case even when the stimulus was not attended. For example, unattended object information was above chance for up to 900ms post stimulus-onset in Experiment 1 (Figure 3A), and up to 600ms in Experiment 2, when the objects were smaller (Figure 3B). This shows that visual information was maintained in the system even though it was not task relevant and it was presented in conjunction with a task-relevant stimulus. Thus, task-irrelevant information appeared to reach higher levels of processing than just feature processing, even though it was not the subject of attention. Indeed, category and animacy decoding (Figure 6) suggests that object stimuli were processed up to abstract levels in the visual hierarchy. In sum, all objects and letters were decodable even during fast-changing visual input and even when they were not attended. Importantly, however, we found that attention enhanced the distinctiveness (i.e., decodability) of the attended visual stimuli.

Attention affected both the strength and duration of evoked visual representations. For both letters and objects, decodability was higher and prolonged when they were task-relevant compared to when they were irrelevant. This is particularly striking because the letter and object tasks involved exactly the same sequences of images and analyses, so differences in decoding arise exclusively from the attentional focus imposed by the task that participants performed. Furthermore, it is important to note that target images (i.e., the two-back repeat stimuli) were not analysed, meaning that target selection and response processes were not contained within our results. The differences we observed can thus mainly be attributed to attentional mechanisms. The enhancement of attended object information around 220ms is consistent with evidence from the attentional blink and target selection literature, which has often reported differences in N2 and P300 ERP components (Kranczioch et al., 2007, 2003; Sergent et al., 2005). Target stimuli in rapid streams have been found to evoke stronger signals around 220ms (Marti and Dehaene, 2017). In these designs, however, it is difficult to distinguish between the effects of target selection and the enhancement of task-relevant information. As all our analyses were performed on non-target stimuli, our results point towards a general enhancement of task-relevant information around 220ms, even for images that are not selected for target processing, which supports flexible task performance across many paradigms.

Attentional enhancement of the letter stimuli followed a different trajectory to that of the objects, with an onset around 100ms for letters versus 220ms for objects in Experiment 1. This could be explained by the letters comprising a smaller part of the stimulus arrangement. Previous work has shown effects of eccentricity on neural responses (e.g., Eimer, 2000; Isik et al., 2014; Müller and Hübner, 2002), but our results could also be attributed to differences in spatial attention allocated to the letter versus image task. Indeed, when we exchanged the stimulus position in Experiment 2, we observed an earlier onset of the attentional effects on object decoding, but the effect for letters seemed to occur later. Channel searchlight analyses further suggested that the attentional enhancement was more left lateralised for the letter task, and right lateralised for the object task. Letter processing is typically left lateralised (Cohen et al., 2003; Puce et al., 1996), whereas animate objects tend to evoke right hemisphere dominant responses (Bentin et al., 1996; Puce et al., 1996, 1995). The different spatio-temporal dynamics between the enhanced coding of relevant information between the object and letter tasks suggest that attentional enhancement effects are dependent on perceptual characteristics of the specific stimuli being processed.

For objects and their conceptual category decoding, we found evidence for no attentional effect on the initial responses (until around 180ms). This is consistent with recent work that reported no evidence for attentional effects on early visual ERP components or decoding accuracies (Alilović et al., 2019; Baumgartner et al., 2018). In contrast, we did find attentional effects on decoding accuracy for the earliest responses to letters (Figure 3C), which were more decodable throughout the epochs when task relevant. One explanation of this difference is that objects are automatically and unconsciously processed, but letters may require an active recruitment of their respective processing mechanisms. Alternatively, the object stimuli used here are visually much more distinct (different colours and shapes) than the letter stimuli, which would facilitate decoding of visual feature differences.

In addition to the stronger decoding for attended images, our results also suggest that attended stimuli were decodable for longer relative to unattended stimuli. For example, above-chance letter decoding in the task-irrelevant condition lasted roughly 100ms, while in the task-relevant condition, it lasted around 600ms. One possible explanation is that attention enhanced the processing of each individual stimulus so that each stimulus was processed up to a higher-level in the visual hierarchy. Alternatively, it could be a function of the task itself, as the two-back task required participants to remember the object for two subsequent presentations. Therefore, the prolonged decodability of attended stimuli could also reflect the requirement to hold the image in working memory. Future work could explore this idea further by manipulating the memory requirement, for example through changing the presentation speed of the streams, or by contrasting a one-back versus a two-back task.

In conclusion, we found that attention enhances the representations of task-relevant visual stimuli, even when they spatially and temporally overlap with task-irrelevant stimuli, and even when they are not selected as targets. Our results suggest that attentional enhancement operates on the specific perceptual processing mechanisms of the stimulus, differing across stimulus type and size. This points towards a multi-stage implementation of information prioritisation that guides early perceptual processes as well as later-stage mechanisms.

Acknowledgements

This research was supported by ARC DP160101300 (TAC), ARC DP200101787 (TAC), and ARC DE200101159 (AKR). The authors acknowledge the University of Sydney HPC service for providing High Performance Computing resources. The authors declare no competing financial interests.

Footnotes

  • https://osf.io/7zhwp/

References

  1. Alilović, J., Timmermans, B., Reteig, L.C., van Gaal, S., Slagter, H.A., 2019. No Evidence that Predictions and Attention Modulate the First Feedforward Sweep of Cortical Information Processing. Cereb. Cortex. https://doi.org/10.1093/cercor/bhz038
  2. Baumgartner, H.M., Graulty, C.J., Hillyard, S.A., Pitts, M.A., 2018. Does spatial attention modulate the earliest component of the visual evoked potential? Cogn. Neurosci. 9, 4–19. https://doi.org/10.1080/17588928.2017.1333490
  3. Benjamini, Y., Hochberg, Y., 1995. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J. R. Stat. Soc. Ser. B Methodol. 57, 289–300.
  4. Bentin, S., Allison, T., Puce, A., Perez, E., McCarthy, G., 1996. Electrophysiological Studies of Face Perception in Humans. J. Cogn. Neurosci. 8, 551–565. https://doi.org/10.1162/jocn.1996.8.6.551
  5. Carlson, T.A., Tovar, D.A., Alink, A., Kriegeskorte, N., 2013. Representational dynamics of object vision: The first 1000 ms. J. Vis. 13, 1. https://doi.org/10.1167/13.10.1
  6. Cohen, L., Martinaud, O., Lemer, C., Lehéricy, S., Samson, Y., Obadia, M., Slachevsky, A., Dehaene, S., 2003. Visual Word Recognition in the Left and Right Hemispheres: Anatomical and Functional Correlates of Peripheral Alexias. Cereb. Cortex 13, 1313–1333. https://doi.org/10.1093/cercor/bhg079
  7. Delorme, A., Makeig, S., 2004. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134, 9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009
  8. Dienes, Z., 2011. Bayesian Versus Orthodox Statistics: Which Side Are You On? Perspect. Psychol. Sci. 6, 274–290. https://doi.org/10.1177/1745691611406920
  9. Ding, J., Sperling, G., Srinivasan, R., 2006. Attentional Modulation of SSVEP Power Depends on the Network Tagged by the Flicker Frequency. Cereb. Cortex 16, 1016–1029. https://doi.org/10.1093/cercor/bhj044
  10. Eimer, M., 2000. An ERP study of sustained spatial attention to stimulus eccentricity. Biol. Psychol. 52, 205–220. https://doi.org/10.1016/S0301-0511(00)00028-4
  11. Goddard, E., Carlson, T.A., Woolgar, A., 2019. Spatial and feature-selective attention have distinct effects on population-level tuning. bioRxiv 530352. https://doi.org/10.1101/530352
  12. Grootswagers, T., Robinson, A.K., Carlson, T.A., 2019a. The representational dynamics of visual objects in rapid serial visual processing streams. NeuroImage 188, 668–679. https://doi.org/10.1016/j.neuroimage.2018.12.046
  13. Grootswagers, T., Robinson, A.K., Shatek, S.M., Carlson, T.A., 2019b. Untangling featural and conceptual object representations. NeuroImage 202, 116083. https://doi.org/10.1016/j.neuroimage.2019.116083
  14. Grootswagers, T., Wardle, S.G., Carlson, T.A., 2017. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data. J. Cogn. Neurosci. 29, 677–697. https://doi.org/10.1162/jocn_a_01068
  15. Isik, L., Meyers, E.M., Leibo, J.Z., Poggio, T., 2014. The dynamics of invariant object recognition in the human visual system. J. Neurophysiol. 111, 91–102. https://doi.org/10.1152/jn.00394.2013
  16. Jasper, H.H., 1958. The ten-twenty electrode system of the International Federation. Electroencephalogr. Clin. Neurophysiol. 10, 371–375.
  17. Jeffreys, H., 1961. Theory of probability. Oxford University Press.
  18. King, J.-R., Pescetelli, N., Dehaene, S., 2016. Brain mechanisms underlying the brief maintenance of seen and unseen sensory information. Neuron 92, 1122–1134.
  19. Kranczioch, C., Debener, S., Engel, A.K., 2003. Event-related potential correlates of the attentional blink phenomenon. Cogn. Brain Res. 17, 177–187. https://doi.org/10.1016/S0926-6410(03)00092-2
  20. Kranczioch, C., Debener, S., Maye, A., Engel, A.K., 2007. Temporal dynamics of access to consciousness in the attentional blink. Neuroimage 37, 947–955.
  21. Kranczioch, C., Debener, S., Schwarzbach, J., Goebel, R., Engel, A.K., 2005. Neural correlates of conscious perception in the attentional blink. Neuroimage 24, 704–714.
  22. Mangun, G.R., 1995. Neural mechanisms of visual selective attention. Psychophysiology 32, 4–18. https://doi.org/10.1111/j.1469-8986.1995.tb03400.x
  23. Mangun, G.R., Hillyard, S.A., Luck, S.J., 1993. Electrocortical substrates of visual selective attention, in: Attention and Performance 14: Synergies in Experimental Psychology, Artificial Intelligence, and Cognitive Neuroscience. The MIT Press, Cambridge, MA, US, pp. 219–243.
  24. Marti, S., Dehaene, S., 2017. Discrete and continuous mechanisms of temporal selection in rapid visual streams. Nat. Commun. 8, 1955. https://doi.org/10.1038/s41467-017-02079-x
  25. Martinez-Trujillo, J.C., Treue, S., 2004. Feature-Based Attention Increases the Selectivity of Population Responses in Primate Visual Cortex. Curr. Biol. 14, 744–751. https://doi.org/10.1016/j.cub.2004.04.028
  26. Maunsell, J.H., Treue, S., 2006. Feature-based attention in visual cortex. Trends Neurosci. 29, 317–322.
  27. Mohsenzadeh, Y., Qin, S., Cichy, R.M., Pantazis, D., 2018. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway. eLife 7, e36329. https://doi.org/10.7554/eLife.36329
  28. Müller, M.M., Andersen, S., Trujillo, N.J., Valdés-Sosa, P., Malinowski, P., Hillyard, S.A., 2006. Feature-selective attention enhances color signals in early visual areas of the human brain. Proc. Natl. Acad. Sci. 103, 14250–14254. https://doi.org/10.1073/pnas.0606668103
  29. Müller, M.M., Hübner, R., 2002. Can the Spotlight of Attention Be Shaped Like a Doughnut? Evidence From Steady-State Visual Evoked Potentials. Psychol. Sci. 13, 119–124. https://doi.org/10.1111/1467-9280.00422
  30. Oostenveld, R., Praamstra, P., 2001. The five percent electrode system for high-resolution EEG and ERP measurements. Clin. Neurophysiol. 112, 713–719. https://doi.org/10.1016/S1388-2457(00)00527-7
  31. Oosterhof, N.N., Connolly, A.C., Haxby, J.V., 2016. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave. Front. Neuroinformatics 10. https://doi.org/10.3389/fninf.2016.00027
  32. Puce, A., Allison, T., Asgari, M., Gore, J.C., McCarthy, G., 1996. Differential Sensitivity of Human Visual Cortex to Faces, Letterstrings, and Textures: A Functional Magnetic Resonance Imaging Study. J. Neurosci. 16, 5205–5215. https://doi.org/10.1523/JNEUROSCI.16-16-05205.1996
  33. Puce, A., Allison, T., Gore, J.C., McCarthy, G., 1995. Face-sensitive regions in human extrastriate cortex studied by functional MRI. J. Neurophysiol. 74, 1192–1199. https://doi.org/10.1152/jn.1995.74.3.1192
  34. Robinson, A.K., Grootswagers, T., Carlson, T.A., 2019. The influence of image masking on object representations during rapid serial visual presentation. NeuroImage 197, 224–231. https://doi.org/10.1016/j.neuroimage.2019.04.050
  35. Rossion, B., Torfs, K., Jacques, C., Liu-Shuang, J., 2015. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain. J. Vis. 15, 18. https://doi.org/10.1167/15.1.18
  36. Rouder, J.N., Speckman, P.L., Sun, D., Morey, R.D., Iverson, G., 2009. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon. Bull. Rev. 16, 225–237.
  37. Rousselet, G.A., Fabre-Thorpe, M., Thorpe, S.J., 2002. Parallel processing in high-level categorization of natural images. Nat. Neurosci. 5, 629–630. https://doi.org/10.1038/nn866
  38. Sergent, C., Baillet, S., Dehaene, S., 2005. Timing of the brain events underlying access to consciousness during the attentional blink. Nat. Neurosci. 8, 1391.
  39. Smout, C.A., Tang, M.F., Garrido, M.I., Mattingley, J.B., 2019. Attention promotes the neural encoding of prediction errors. PLOS Biol. 17, e2006812. https://doi.org/10.1371/journal.pbio.2006812
  40. Tang, M.F., Ford, L., Arabzadeh, E., Enns, J.T., Visser, T.A.W., Mattingley, J.B., 2019. Neural dynamics of the attentional blink revealed by encoding orientation selectivity during rapid visual presentation. bioRxiv 595355. https://doi.org/10.1101/595355
  41. Wagenmakers, E.-J., 2007. A practical solution to the pervasive problems of p values. Psychon. Bull. Rev. 14, 779–804. https://doi.org/10.3758/BF03194105
  42. Wetzels, R., Matzke, D., Lee, M.D., Rouder, J.N., Iverson, G.J., Wagenmakers, E.-J., 2011. Statistical Evidence in Experimental Psychology: An Empirical Comparison Using 855 t Tests. Perspect. Psychol. Sci. 6, 291–298. https://doi.org/10.1177/1745691611406923
  43. Wetzels, R., Wagenmakers, E.-J., 2012. A default Bayesian hypothesis test for correlations and partial correlations. Psychon. Bull. Rev. 19, 1057–1064. https://doi.org/10.3758/s13423-012-0295-x
  44. Yekutieli, D., Benjamini, Y., 1999. Resampling-based false discovery rate controlling multiple test procedures for correlated test statistics. J. Stat. Plan. Inference 82, 171–196. https://doi.org/10.1016/S0378-3758(99)00041-5
  45. Zellner, A., Siow, A., 1980. Posterior odds ratios for selected regression hypotheses, in: Bernardo, J.M., DeGroot, M.H., Lindley, D.V., Smith, A.F.M. (Eds.), Bayesian Statistics: Proceedings of the First International Meeting. University of Valencia Press, Valencia, pp. 585–603.
  46. Zhang, W., Luck, S.J., 2009. Feature-based attention modulates feedforward visual processing. Nat. Neurosci. 12, 24–25. https://doi.org/10.1038/nn.2223
Posted June 26, 2020.