Abstract
Crossmodal plasticity refers to reorganisation of sensory cortices in the absence of their main sensory input. Understanding this phenomenon provides insights into brain function and its potential for change and enhancement. Using fMRI, we investigated how early deafness and consequent varied language experience influence crossmodal plasticity and the organisation of executive functions in the adult brain of male and female individuals. Results from a range of visual executive function tasks (working memory, switching, planning, inhibition) show that, as a function of the degree of deafness, deaf individuals specifically recruit “auditory” regions during switching. This recruitment correlates with performance, highlighting its functional relevance. We also observed recruitment of auditory temporal regions during planning, but only in deaf individuals with the highest language scores, suggesting differential use of linguistic skills to support executive functions. Our results show executive processing in typically sensory regions, suggesting that the development and ultimate role of brain regions are influenced by perceptual environmental experience.
Introduction
Understanding the impact of deafness on brain organisation reveals the effect that sensory developmental experience has on brain structure and function, and how they are differentially affected by nature and nurture. Previous research has focused on how deafness affects sensory processing and the reorganisation of sensory areas, but less is known about how it modulates higher-order cognitive processes. Language and executive processing are strongly linked (Figueras et al., 2008; Woodard et al., 2016), and the study of cognition in deaf individuals can provide unique insights into the nature of this relationship, given the great heterogeneity in language experience and proficiency in this population. Here we study executive processing in deaf individuals to understand how early sensory and language experiences modulate crossmodal plasticity and the organisation of cognitive networks in the human brain.
What is the role of the deaf auditory cortex in cognition?
Executive functions (EF) refer to a set of cognitive processes responsible for the performance of flexible and goal-directed behaviours which allow individuals to act in a complex and changing environment (Baddeley, 2002; Diamond, 2013; Ridderinkhof et al., 2004). EF have been widely associated with activity in frontoparietal areas (D’Esposito & Grossman, 1996). These regions are thought to be specialised in representing abstract and task-related information (Christophel et al., 2017), maintaining behavioural goals in mind, filtering distractors (Ku et al., 2015), and performing top-down modulation on sensory regions in the service of attentional goals and task performances (Zanto et al., 2011). On the other hand, sensory regions, such as the auditory and visual cortices, are usually considered to preferentially process lower-level perceptual features and to contribute to the storage of representations of these features in working memory (Christophel et al., 2017; Ku et al., 2015; Zanto et al., 2011). However, the study of deafness and blindness suggests that this preference might be at least partially driven by environmental sensory experience, given that reorganisation for cognitive processing has been observed in sensory areas of deaf and blind individuals (Amedi et al., 2003, 2004; Bedny et al., 2011; Buchsbaum et al., 2005; Cardin et al., 2018; Ding et al., 2015). For example, previous studies have shown recruitment for visual working memory in the posterior superior temporal cortex (pSTC) of deaf individuals (Andin et al., 2021; Buchsbaum et al., 2005; Cardin et al., 2018; Ding et al., 2015), suggesting a change in function in this area from auditory to cognitive processing as a consequence of deafness. While crossmodal plasticity usually refers to the adaptation of sensory brain regions to processing information from a different sensory modality (Cardin et al., 2020a; Cardin et al., 2020b; Frasnelli et al., 2011; Heimler et al., 2015; Kral, 2007; Merabet & Pascual-Leone, 2010; Ricciardi et al., 2020), these working memory responses in pSTC seem to suggest that in the absence of early sensory stimulation, a sensory region can change its function as well as the sensory modality to which it responds (Bedny, 2017; Cardin et al., 2020b). In addition, evidence suggests that auditory areas in deaf people are functionally connected to frontal regions involved in working memory, potentially being part of the same cortical network for EF and cognitive control (Cardin et al., 2018; Ding et al., 2015). Together, these findings suggest that the nature of the neural circuitry engaged in EF and cognitive control may be modulated by early sensory experience. Our first aim in this study is to understand the role of the auditory cortex in cognition in deaf individuals and the effect of sensory experience on the reorganisation of cognitive networks. The recruitment of auditory regions during visual working memory in deaf individuals could reflect a role in cognitive control, in line with what is generally found in the frontoparietal network; or it could also reflect involvement in a specific executive subcomponent that allows successful control, updating, manipulation, and storage of information (e.g. attentional shifts, inhibitory control). It could also reflect the storage of relevant sensory features, as has been observed in other sensory regions (Druzgal & D’Esposito, 2001; Feredoes et al., 2011). 
To dissociate between these hypotheses and understand the role of the deaf auditory cortex in cognition, we measured the recruitment of these regions during a range of tasks tapping into different subcomponents of executive function.
Does language modulate crossmodal plasticity and executive processing?
The study of deafness also allows us to investigate how early language experience impacts EF and brain reorganisation. This is due to the great heterogeneity in language experience and proficiency in this population. Deaf children of deaf parents are usually exposed to sign language from birth, and acquire language following similar developmental milestones to those of hearing children learning a spoken language. However, most deaf children are born to hearing parents (∼95%) (Mitchell & Karchmer, 2004) who usually do not know a sign language. In these cases, the onset of language acquisition varies but is typically delayed, with negative consequences for its development both in the signed and spoken modality (Humphries et al., 2016; Mayberry, 2007, 2010). Thus, in many cases, the different sensory experience of deaf individuals is accompanied by a delay in language acquisition and language deprivation (Humphries et al., 2016). This relationship between auditory and language experience has been a confound in many neuroscience, behavioural, and clinical studies (Lyness et al., 2013), and language deprivation effects have at times been mistaken for auditory deprivation effects. However, if language experience is measured or controlled, both in the signed and spoken modality, it can help us understand how language shapes the organisation of EF networks in the adult brain. This is a question that is difficult to study in the brain of hearing individuals because language is usually acquired to a high level of proficiency through environmental exposure. Atypical populations, including those with developmental language disorders, have helped to inform this relationship (Akbar et al., 2013; Marton, 2008), but in these populations, there are usually underlying neurological factors that could also impact cognitive function and neural organisation. In contrast, deaf children are fully able to acquire language within its critical period through the same milestones and to the same level of fluency as hearing children (Emmorey, 2002; Morgan & Woll, 2002), and it is only due to lack of environmental exposure that this is not achieved (Humphries et al., 2012).
Previous behavioural studies in deaf children have shown that language deprivation, and not deafness per se, negatively impacts EF (Botting et al., 2017; Figueras et al., 2008; Marshall et al., 2015). It is also known that late language acquisition can impact the reorganisation of auditory areas and the neural substrates of language processing (Cardin et al., 2013, 2016; Ferjan Ramirez et al., 2014; MacSweeney et al., 2008a; MacSweeney et al., 2008b; Mayberry et al., 2011; Neville et al., 1998), but it is not known what effect it has on neural executive processing. Revealing the effect of language experience on executive processing in the brain will provide unique insights into the nature and mechanisms of this relationship. To achieve this, here we measured signed and spoken language proficiency in a group of deaf adults with varied language backgrounds to study the effect of modality-independent language proficiency on EF processing and crossmodal plasticity.
The present study
Our overarching aim is to investigate whether sensory experience and language proficiency have a modulatory effect on behavioural performance and neural processing during EF tasks. Specifically, we aim to:
understand the role of the auditory cortex of deaf individuals in executive processing,
understand the effect of sensory experience on the function of frontoparietal regions,
investigate how different language experiences in childhood relate to executive processing, and how this relationship manifests in the mature brain.
To achieve these, deaf and hearing individuals took part in an fMRI experiment including visual tasks that tapped into different EF: working memory, planning, switching, and inhibition (Figure 1). To study the effect of language on EF, we recruited a group of deaf participants with different language backgrounds, reflecting the heterogeneity observed in deaf communities, with varied proficiency and age of language acquisition. In this group, we measured grammaticality judgements in signed and spoken language and combined them into a single, modality-independent language proficiency measure which was used as a covariate in the analysis of behavioural and neural responses (see Methods).
Figure 1. Each task had a higher executive demands condition (HEF=Higher Executive Function, purple) and a lower executive demands condition (LEF=Lower Executive Function, peach). See Methods for details of the design.
If the functional reorganisation in the deaf auditory cortex applies to multiple different executive control functions, we would expect all four EF tasks to recruit temporal regions in deaf participants. However, if deaf auditory areas are involved in specific subcomponents of executive processing, these regions will be differentially activated by each of the tasks. If the strength of the neural activity is correlated with performance in the tasks, it will further show that crossmodal plasticity has tangible influences on behaviour (Bottari et al., 2014; Lomber et al., 2010; Pavani & Bottari, 2012).
Furthermore, differences in responses between deaf and hearing individuals in frontoparietal regions typically involved in cognitive control will show that early sensory experience also affects the organization of EF networks.
Finally, we hypothesise that modality-independent language proficiency will predict behavioural performance and neural response in EF tasks in deaf adults.
Materials and Methods
Participants
There were two groups of participants (see summary demographics in Table 1):
29 congenitally or early (before 3 years of age) severely-to-profoundly deaf individuals whose first language is British Sign Language (BSL) and/or English (Table 1-1). We recruited a larger number of deaf participants to reflect the language variability of the deaf population in the UK, as discussed in the “Language assessment” section. Datasets from three deaf participants were excluded from all analyses due to excessive motion in the scanner. One participant was excluded because the pure-tone average (PTA) in their best ear was less than 70dB. There was a total of 25 deaf participants included in the analysis of at least one executive function task (see Table 1-2 for details on exclusion).
20 hearing individuals who are native speakers of English with no knowledge of any sign language.
Deaf and hearing participants were matched on age, gender, nonverbal intelligence, and visuospatial working memory span (p>0.05 for each parameter) (Table 1, Table 1-3).
All participants gave written informed consent. All procedures followed the standards set by the Declaration of Helsinki and were approved by the ethics committee of the School of Psychology at the University of East Anglia (UEA) and the Norfolk and Norwich University Hospital (NNUH) Research and Development department.
Participants were recruited through public events, social media, and participant databases of the UCL (University College London) Deafness, Cognition and Language Research Centre (DCAL) and the UEA School of Psychology. Participants were all right-handed (self-reported), had normal or corrected-to-normal vision, and no history of neurological conditions. All participants were compensated for their time, travel, and accommodation expenses.
General procedure
Participants took part in one behavioural and one scanning session. The sessions took place on the same or different days. The behavioural session included:
Standardised nonverbal IQ and working memory tests: the Block Design subtest of the Wechsler Abbreviated Scale of Intelligence (Wechsler, 1999) (WASI) and the Corsi Block-tapping test (Corsi, 1972) implemented in PEBL software (Mueller & Piper, 2014) (http://pebl.sourceforge.net/).
Language tasks: four tasks were administered to assess language proficiency in English and BSL in deaf participants (see the “Language assessment” section below).
Pre-scanning training: the training session ensured that participants understood the tasks (the rules were explained to them in their preferred language – English or BSL) and reached an accuracy of at least 75%.
Audiogram screening: pure-tone averages (PTAs) were used to measure the degree of deafness in deaf participants. Copies of audiograms were provided by the participants from their audiology clinics or were collected at the time of testing using a Resonance R17 screening portable audiometer. Participants included in the study had a mean PTA greater than 75dB averaged across the speech frequency range (0.5, 1, 2kHz) in both ears (mean=93.66±7.79dB; range: 78.33-102.5dB). Four participants did not provide their audiograms but they were all congenitally severely or profoundly deaf and communicated with the researchers using BSL or relying on lipreading.
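For clarity, a minimal Python sketch of this averaging step is given below; the threshold values are hypothetical and simply stand in for an individual audiogram.

    # Minimal sketch of the pure-tone average (PTA) criterion; thresholds (dB HL) are hypothetical
    speech_freqs_khz = (0.5, 1, 2)
    left_ear = {0.5: 85, 1: 95, 2: 100}
    right_ear = {0.5: 90, 1: 100, 2: 105}

    pta_left = sum(left_ear[f] for f in speech_freqs_khz) / len(speech_freqs_khz)
    pta_right = sum(right_ear[f] for f in speech_freqs_khz) / len(speech_freqs_khz)
    mean_pta = (pta_left + pta_right) / 2      # averaged across both ears

    include = mean_pta > 75                    # inclusion criterion used here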
During the scanning session, fMRI data were acquired while participants performed four visual executive function tasks on working memory, planning, switching, and inhibition (see details below). The order of the tasks was counterbalanced across participants.
Language assessment
One of our aims was to investigate the relationship between language, executive functions, and neural reorganisation in the deaf group, independently of the modality of the preferred language of the individual (signed or spoken). To capture the variability in language proficiency in the British deaf population, we recruited a larger group of deaf participants with different language backgrounds (see Table 1-1 for details) and measured their language proficiency in English and BSL.
To assess the language proficiency of deaf participants, we chose grammaticality judgement tests measuring language skills in English and BSL. The BSL grammaticality judgement task (BSLGJT) is described in Cormier et al., 2012, and the English grammaticality judgement task (EGJT) was designed based on examples from Linebarger et al., 1983. The BSLGJT and the EGJT use the same method of assessing grammaticality judgements of different syntactic structures in English and BSL. Grammaticality judgement tests have been used with deaf participants before and have proved to be efficient in detecting differences in language proficiency among participants with varying ages of acquisition (Boudreault & Mayberry, 2006; Cormier et al., 2012).
Deaf participants performed both the BSL and English tests if they knew both languages, or only the English tests if they did not know BSL. Hearing participants only performed the English tests (for control purposes).
Experimental design
All tasks were designed so that each had one condition with higher executive demands (Higher Executive Function; HEF) and one with lower demands (Lower Executive Function; LEF) (Figure 1).
Working memory
We used a visuospatial working memory task (Fedorenko et al., 2011, 2013) (Figure 1) contrasted with a perceptual control task. A visual cue (1500ms) indicated which task participants should perform. The cue was followed by a 3×4 grid. Black squares were displayed two at a time at random locations on the grid, three times, for 1000ms each. In the HEF condition, participants were asked to memorise the six locations. Then they indicated their cumulative memory for these locations by choosing between two grids in a two-alternative, forced-choice paradigm via a button press. The response grids were displayed until the participant responded or for a maximum of 3750ms. In the control condition (LEF), participants indicated whether a blue square was present in any of the grids, ignoring the configuration of the highlighted squares. Trials were separated by an inter-trial interval (ITI) with duration jittered between 2000-3500ms. Each experimental run had 30 working memory trials and 30 control trials.
Planning
We used a computer version of the classic Tower of London task (Morris et al., 1993; van den Heuvel et al., 2003) (Figure 1). In each trial, two configurations of coloured beads placed on three vertical rods appeared on a grey screen, with the tallest rod containing up to three beads, the middle rod containing up to two beads, and the shortest rod containing up to one bead. In the Tower of London condition (HEF), participants had to determine the minimum number of moves needed to transform the starting configuration into the goal configuration following two rules: 1) only one bead can be moved at a time; 2) a bead cannot be moved when another bead is on top. There were four levels of complexity, depending on the number of moves required (2, 3, 4, and 5). In the control condition (LEF), participants were asked to count the number of yellow and blue beads in both displays. For both conditions, two numbers were displayed at the bottom of the screen: one was the correct response and the other was incorrect by +1 or −1. Participants answered with their left hand when they chose the number on the left side of the screen, and with their right hand when their choice was on the right. The maximum display time for each stimulus was 30 seconds. The duration of the ITI was jittered between 2000-3500ms. There were 30 trials in the Tower of London condition and 30 trials in the control condition.
Switching
In this task, participants had to respond to the shape of geometric objects, i.e., a rectangle and a triangle (Rubinstein et al., 2001; Rushworth et al., 2002) (Figure 1). At the beginning of the run, participants were instructed to press a key with their left hand when they saw a rectangle and with their right hand when they saw a triangle. Each block started with a cue indicating that the task was to either keep the rule they used in the previous block (“stay” trials; LEF) or to switch it (“switch” trials; HEF). In the switch trials, participants had to apply the opposite mapping between the shape and the response hand. Each block included the presentation of the instruction cue (200ms), a fixation cross (500ms), and two to five task trials. During each trial, a geometrical shape (either a blue rectangle or a blue triangle) was shown at the centre of the screen until the participant responded, for a maximum of 1500ms. Visual feedback (500ms) followed the participant’s response. There were 230 trials in 80 blocks of either the LEF (40) or HEF (40) condition. The analysis for the HEF condition only included the first trial of each switch block (see below).
Inhibition
To study inhibitory control, we used Kelly and Milham’s version (A. Kelly & Milham, 2016) of the classic Simon task (https://exhibits.stanford.edu/data/catalog/zs514nn4996). A square appeared on the left or the right side of the fixation cross. The colour of the squares was the relevant aspect of the stimuli, with their position irrelevant for the task. Participants were instructed to respond to the red square with the left hand and the green square with the right hand. In the congruent condition (LEF), the button press response was spatially congruent with the location of the stimuli (e.g. the right-hand response for a square appearing on the right side of the screen) (Figure 1). In the incongruent condition (HEF), the correct response was in the opposite location with respect to the stimulus. Half of the trials were congruent, and half were incongruent. Each stimulus was displayed for 700ms, with a response window of up to 1500ms. The ITI was 2500ms for most trials, with additional blank intervals of 7.5 seconds (n=20), 12.5 seconds (n=2), and 30 seconds (n=1). Participants completed 1 or 2 runs of this task, each consisting of a maximum of 200 trials.
Statistical analysis of behavioural performance in executive function tasks
Average accuracy (%correct) and reaction time (RT) were calculated, and outlier RT values were excluded. In the switching task, the switch cost was calculated as the difference in the percentage of errors (%errors) or RT between the first trial of a switch block and all stay trials. In the inhibition task, the Simon effect was calculated as the difference in %errors or RT between the incongruent and congruent trials.
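The sketch below illustrates how these behavioural measures could be computed from trial-level data in Python; the input file and column names are hypothetical, not part of the original analysis pipeline.

    import pandas as pd

    # Hypothetical trial-level data (after outlier RT exclusion): columns
    # 'task', 'condition', 'trial_in_block', 'correct' (0/1), 'rt' (seconds)
    trials = pd.read_csv("trials_cleaned.csv")

    sw = trials[trials.task == "switching"]
    first_switch = sw[(sw.condition == "switch") & (sw.trial_in_block == 1)]
    stay = sw[sw.condition == "stay"]

    # Switch cost: first trials of switch blocks vs. all stay trials
    rt_switch_cost = first_switch.rt.mean() - stay.rt.mean()
    err_switch_cost = (1 - first_switch.correct.mean()) * 100 - (1 - stay.correct.mean()) * 100

    inh = trials[trials.task == "inhibition"]
    incong, cong = inh[inh.condition == "incongruent"], inh[inh.condition == "congruent"]

    # Simon effect: incongruent vs. congruent trials
    rt_simon = incong.rt.mean() - cong.rt.mean()
    err_simon = (1 - incong.correct.mean()) * 100 - (1 - cong.correct.mean()) * 100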
Differences between groups on accuracy or RT were explored with repeated-measures ANOVAs with between-subjects factor group (hearing, deaf) and within-subjects factor condition (LEF, HEF). Differences in the switch costs and Simon effects were tested with ANCOVAs with the switch cost or the Simon effect as the dependent variable and group (hearing, deaf) as a fixed factor.
To explore the effect of language proficiency on behavioural performance in the deaf group, language z-scores were included as covariates in separate ANCOVAs.
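As a rough illustration, the Python sketch below (using the pingouin package) mirrors these analyses; the data files and column names are hypothetical, and the switch-cost group comparison is sketched as a one-way ANOVA rather than the ANCOVA reported above, with the language effect shown as a simple correlation.

    import pandas as pd
    import pingouin as pg

    # Hypothetical input files: one row per participant x condition (long format),
    # and one row per participant with derived measures.
    long_df = pd.read_csv("task_accuracy_long.csv")    # columns: subject, group, condition, accuracy
    subj_df = pd.read_csv("subject_measures.csv")      # columns: subject, group, switch_cost, language_z

    # 2 (group) x 2 (condition) mixed ANOVA on accuracy (or RT)
    aov = pg.mixed_anova(data=long_df, dv="accuracy", within="condition",
                         between="group", subject="subject")

    # Group comparison of switch costs (sketched here as a one-way ANOVA)
    grp = pg.anova(data=subj_df, dv="switch_cost", between="group")

    # Relationship between language proficiency and the switch cost in the deaf group
    deaf = subj_df[subj_df.group == "deaf"]
    lang = pg.corr(deaf.language_z, deaf.switch_cost)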
Image acquisition
Images were acquired at the Norfolk and Norwich University Hospital (NNUH) in Norwich, UK, using a 3 Tesla wide-bore GE 750W MRI scanner and a 64-channel head coil. Communication with the deaf participants occurred in BSL through a closed-circuit camera, or in written English presented on the screen. An intercom was used for communication with hearing participants. All volunteers were given ear protectors. Stimuli were presented with PsychoPy software (Peirce, 2007) (https://psychopy.org) through a laptop (MacBook Pro, Retina, 15-inch, Mid 2015). All stimuli were projected by an Avotec Silent Vision projector (https://www.avotecinc.com/high-resolution-projector) onto a screen located at the back of the magnet’s bore. Participants watched the screen through a mirror mounted on the head coil. Button responses were recorded via fORP (Fiber Optic Response Pads) button boxes (https://www.crsltd.com/tools-for-functional-imaging/mr-safe-response-devices/forp/). Functional imaging data were acquired using a gradient-recalled echo (GRE) EPI sequence (50 slices, TR=3,000ms, TE=50ms, FOV=192×192mm, 2mm slice thickness, distance factor 50%) with an in-plane resolution of 3×3mm. The protocol included six functional scans: 1 resting state scan (reported in a different manuscript) and five task-based fMRI scans (working memory: 11 minutes, 220 volumes; planning: 11.5 minutes, 230 volumes; switching: 10.5 minutes, 210 volumes; inhibition: two runs of 10 minutes, 200 volumes each). Some participants did not complete all functional scans (Table 1-2). An anatomical T1-weighted scan (IR-FSPGR, TI=400ms, 1mm slice thickness) with an in-plane resolution of 1×1mm was acquired during the session.
Raw B0 field map data were acquired using a 2D multi-echo GRE sequence with the following parameters: TR=700ms, TE=4.4 and 6.9ms, flip angle=50°, matrix size=128×128, FOV=240mm×240mm, number of slices=59, thickness=2.5mm, and gap=2.5mm. Real and imaginary images were reconstructed for each TE to permit calculation of B0 field maps in Hz (Fessler et al., 2005; Funai et al., 2008; Jezzard & Balaban, 1995).
fMRI preprocessing
fMRI data were analysed with MATLAB 2018a (MathWorks, MA, USA) and Statistical Parametric Mapping software (Penny et al., 2011) (SPM12; Wellcome Trust Centre for Neuroimaging, London, UK). The anatomical scans were segmented into different tissue classes: grey matter, white matter, and cerebrospinal fluid. Skull-stripped anatomical images were created by combining the segmented images using the Image Calculation function in SPM (ImCalc, http://tools.robjellis.net). The expression used was: [(i1.*(i2+i3+i4))>threshold], where i1 was the bias-corrected anatomical scan and i2, i3 and i4 were the tissue images (grey matter, white matter, and cerebrospinal fluid, respectively). The threshold was adjusted between 0.5 and 0.9 to achieve adequate brain extraction for each participant. Each participant’s skull-stripped image was normalised to the standard MNI space (Montreal Neurological Institute) and the deformation field obtained during this step was used for normalisation of the functional scans.
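The ImCalc expression can be reproduced outside SPM; the sketch below uses nibabel with hypothetical file names and assumes SPM's conventional c1/c2/c3 prefixes for the tissue segmentations.

    import nibabel as nib

    # Hypothetical file names; c1/c2/c3 are SPM's conventional prefixes for the grey
    # matter, white matter and CSF segmentations (i2, i3, i4 in the expression above).
    anat = nib.load("msub01_T1w.nii")          # bias-corrected anatomical scan (i1)
    gm = nib.load("c1sub01_T1w.nii")
    wm = nib.load("c2sub01_T1w.nii")
    csf = nib.load("c3sub01_T1w.nii")

    threshold = 0.5                            # adjusted per participant between 0.5 and 0.9

    # (i1 .* (i2 + i3 + i4)) > threshold, as in the ImCalc expression above
    product = anat.get_fdata() * (gm.get_fdata() + wm.get_fdata() + csf.get_fdata())
    stripped = (product > threshold).astype("uint8")

    nib.save(nib.Nifti1Image(stripped, anat.affine), "skullstripped_sub01.nii")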
Susceptibility distortions in the EPI images were estimated using a field map that was co-registered to the BOLD reference (Fessler et al., 2005; Funai et al., 2008). Images were realigned using the pre-calculated phase map, co-registered, slice-time corrected, normalised, and smoothed (using an 8mm FWHM Gaussian kernel). All functional scans were checked for motion and artefacts using the ART toolbox (https://www.nitrc.org/projects/artifact_detect).
fMRI first-level analysis
The first-level analysis was conducted by fitting a general linear model (GLM) with regressors of interest for each task (see details below). All events were modelled as boxcar functions convolved with SPM’s canonical hemodynamic response function. The motion parameters, derived from the realignment of the images, were added as regressors of no interest. Regressors were entered into a multiple regression analysis to generate parameter estimates for each regressor at every voxel.
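As an illustration of this modelling step, the sketch below builds a comparable design matrix with nilearn rather than SPM; the event timings and motion parameters are hypothetical placeholders.

    import numpy as np
    import pandas as pd
    from nilearn.glm.first_level import make_first_level_design_matrix

    tr, n_scans = 3.0, 220                       # e.g. the working memory run: TR=3s, 220 volumes
    frame_times = np.arange(n_scans) * tr

    # Hypothetical onsets/durations (seconds) for the two conditions of interest
    events = pd.DataFrame({
        "onset":      [10.0, 28.5, 47.0, 65.5],
        "duration":   [3.5, 3.5, 3.5, 3.5],
        "trial_type": ["HEF", "LEF", "HEF", "LEF"],
    })

    # Hypothetical realignment parameters, added as regressors of no interest
    rng = np.random.default_rng(0)
    motion = pd.DataFrame(rng.standard_normal((n_scans, 6)) * 0.1,
                          columns=[f"motion_{i}" for i in range(6)])

    design = make_first_level_design_matrix(frame_times, events,
                                            hrf_model="spm",       # canonical HRF, as in SPM
                                            drift_model="cosine",
                                            add_regs=motion)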
Switching
The first trial of each switch block (HEF) and all stay trials (LEF) were modelled as regressors of interest separately for the left- and right-hand responses. The cues and the remaining switch trials were included as regressors of no interest.
Working memory
The conditions of interest were working memory (HEF) and control (LEF). The onset was set at the presentation of the first grid, with the duration set at 3.5 seconds (i.e., the duration of the three grids plus a 500ms blank screen before the appearance of the response screen; Figure 1). Button responses were included separately for each hand and condition as regressors of no interest.
Planning
Tower of London (HEF) and control (LEF) conditions were included in the model as regressors of interest, with onsets at the beginning of each trial and duration set to the trial-specific RT. Button responses were modelled separately for each hand as regressors of no interest.
Inhibition
Four regressors of interest were obtained by combining the visual hemifield where the stimulus appeared with the response hand (1. right visual hemifield—left hand; 2. left visual hemifield—right hand; 3. right visual hemifield—right hand; 4. left visual hemifield—left hand). Right visual hemifield—left hand and left visual hemifield—right hand were the incongruent conditions (HEF), whereas the right visual hemifield-right hand and left visual hemifield-left hand were the congruent conditions (LEF).
Whole-brain second-level analysis
Beta values for each regressor of interest in each task were taken into separate second-level repeated-measures ANOVAs as described in the results. Significantly active voxels at p<0.05 FWE-corrected at the peak or cluster level (cluster-forming threshold p<0.001) are reported in the results section as x, y, and z coordinates in standard MNI space.
Region of interest analysis
We conducted a region of interest (ROI) analysis to investigate differences between groups in executive processing and their relationship to behavioural variables. Identifying main effects and interactions between groups, conditions, and behaviour, across all voxels in the brain, requires a large number of comparisons. In order to acquire enough data to conduct this type of whole-brain analysis for four different tasks, we would need very long or multiple scanning sessions — this is not feasible when testing special populations, where many participants have to travel from different parts of the country. For that reason, we limited our statistical inferences to a predefined set of temporal auditory regions and frontoparietal regions.
Temporal ROIs definition
Three regions were included in this analysis: Heschl’s gyrus (HG), the planum temporale (PT), and the posterior superior temporal cortex (pSTC) (Figure 2A). HG and the PT were defined anatomically, using FreeSurfer software (Fischl, 2012) (https://surfer.nmr.mgh.harvard.edu). Full descriptions of these procedures can be found elsewhere (Dale et al., 1999; Fischl et al., 2002), but in short, each participant’s bias-corrected anatomical scan was parcellated and segmented, and voxels with the HG label and the PT label were exported using SPM’s ImCalc function (http://robjellis.net/tools/imcalc_documentation.pdf). Participant-specific ROIs were then normalised to the standard MNI space using the deformation field from the normalisation step of the preprocessing.
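The label-extraction step could equally be done with nibabel, as in the sketch below; the parcellation file name and the label indices are placeholders and would need to match the FreeSurfer atlas actually used.

    import nibabel as nib

    # Hypothetical FreeSurfer parcellation volume converted to NIfTI; the label IDs
    # below are placeholders, not the actual atlas codes for HG and PT.
    parc = nib.load("sub01_parcellation.nii")
    labels = parc.get_fdata()

    HG_LABEL, PT_LABEL = 1034, 1035     # placeholder label indices

    hg_roi = (labels == HG_LABEL).astype("uint8")
    pt_roi = (labels == PT_LABEL).astype("uint8")

    nib.save(nib.Nifti1Image(hg_roi, parc.affine), "sub01_HG_native.nii")
    nib.save(nib.Nifti1Image(pt_roi, parc.affine), "sub01_PT_native.nii")
    # The native-space ROIs would then be warped to MNI space with the deformation
    # field estimated during preprocessing.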
Figure 2. A. Temporal regions included in the analysis: Heschl’s gyrus (HG), the planum temporale (PT), and the posterior superior temporal cortex (pSTC). HG and PT were defined anatomically, in a subject-specific manner, using the FreeSurfer software package (Fischl, 2012). The figure shows the overlap of all subject-specific ROIs. Common voxels between left PT and left pSTC have been subtracted from left PT (see Methods). pSTC was defined functionally, based on the findings of Cardin et al., 2018 (see Methods). B. fMRI group effects in temporal ROIs. ***p<0.001; *p<0.05.
pSTC was specified following findings from Cardin et al., 2018, where a visual working memory crossmodal plasticity effect was found in right and left pSTC in deaf individuals [left: −59 −37 10; right: 56 −28 −1]. Right and left functional pSTC ROIs were defined using data from Cardin et al., 2018, with the contrast [deaf (working memory > control task) > hearing (working memory > control task)] (p<0.005, uncorrected).
There was an average partial overlap of 8.2 voxels (SD=6.86) between left PT and left pSTC, with no significant difference in overlap between groups (deaf: mean=9.92, SD=7.02; hearing: mean=6.05, SD=6.17). To ensure that the two ROIs were independent, common voxels were removed from left PT in a subject-specific manner. Removing the overlapping voxels did not qualitatively change the results.
Frontoparietal ROIs definition
Frontoparietal ROIs were defined by extracting uniformity clusters from a meta-analysis map of 128 studies associated with the keyword “executive function” using neurosynth.org (Yarkoni et al., 2011). From the uniformity clusters, we created spherical, symmetrical, and bilateral ROIs using MarsBaR (Brett et al., 2002) (MARSeille Boîte À Région d’Intérêt, http://marsbar.sourceforge.net). The anatomical labels of the ROIs, the MNI coordinates, and their radius are shown in Figure 3.
Figure 3. Numbers in brackets indicate the central coordinates of each ROI: dorsolateral prefrontal cortex (DLPFC) [left: −46 6 32; right: 46 8 32; radius 10mm]; frontal eye fields (FEF) [left: −29 0 57, right: 29 0 57; radius 10mm]; pre-supplementary motor area (pre-SMA) [left: −6 16 46, right: 6 16 46; radius 8mm]; the insula [left: −34 21 0, right: 36 19 −2; radius 7mm]; superior parietal lobule (SPL) [left: −34 −55 46, right: 34 −55 46; radius 10mm].
Areas of interest were dorsolateral prefrontal cortex (DLPFC), frontal eye fields (FEF), pre-supplementary motor area (pre-SMA), the insula, and the superior parietal lobule (SPL). We set a 10-mm radius for the DLPFC, FEF, and SPL, an 8-mm radius for the insula, and a 7-mm radius for the pre-SMA to exclude voxels in neighbouring gyri.
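A sketch of how such spherical ROIs can be generated on an MNI grid is shown below (Python/nibabel rather than MarsBaR); the template file name is a placeholder, and the coordinates and radius are taken from the right DLPFC entry in Figure 3.

    import numpy as np
    import nibabel as nib

    def sphere_roi(template_img, center_mni, radius_mm):
        """Binary spherical ROI around an MNI coordinate, on the template's voxel grid."""
        affine, shape = template_img.affine, template_img.shape[:3]
        ii, jj, kk = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
        vox = np.stack([ii, jj, kk, np.ones_like(ii)], axis=-1)       # homogeneous voxel coords
        mni = vox @ affine.T                                          # voxel -> MNI (mm)
        dist = np.linalg.norm(mni[..., :3] - np.asarray(center_mni), axis=-1)
        return nib.Nifti1Image((dist <= radius_mm).astype("uint8"), affine)

    # Example: right DLPFC sphere, using the coordinates and radius listed in Figure 3
    template = nib.load("mni_grid.nii")            # hypothetical image defining the MNI grid
    dlpfc_right = sphere_roi(template, (46, 8, 32), radius_mm=10)
    nib.save(dlpfc_right, "roi_DLPFC_right.nii")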
Post-hoc ROI analysis of Te 1.0
Area Te 1.0 of HG was defined using the cytoarchitectonic maps generated by Tahmasebi et al., 2009, based on those produced by Morosan et al., 2001. Subject-specific cytoarchitectonic ROIs were specified by combining, separately for each hemisphere, voxels that were present both in the participant-specific FreeSurfer HG ROI and in the Te 1.0 map from Tahmasebi et al., 2009.
Statistical analysis
Parameter estimates were extracted from each ROI using MarsBaR 0.44 (http://marsbar.sourceforge.net) (Brett et al., 2002). All the statistical analyses presented in the results section were conducted using JASP (JASP Team, 2020) (https://jasp-stats.org). The data were entered into separate mixed repeated-measures ANOVAs for each task and set of ROIs. Factors in the ANOVAs on the temporal ROIs included: the between-subjects factor group (hearing, deaf) and the within-subjects factors ROI (HG, PT, pSTC), hemisphere (left, right), and condition (LEF, HEF). For the language analysis (deaf group only), we conducted separate repeated-measures ANOVAs for each task with factors ROI (HG, PT, pSTC), hemisphere (left, right), and condition (LEF, HEF), and used language z-score as a covariate. The ANOVAs conducted on the frontoparietal ROIs had the following setup: between-subjects factor group (hearing, deaf) and within-subjects factors ROI (DLPFC, FEF, pre-SMA, insula, SPL), hemisphere (left, right), and condition (LEF, HEF).
We have investigated language effects in the temporal and frontoparietal regions in the switching and planning tasks in the deaf group, where we found significant effects of language on behavioural performance. The ANCOVAs investigating the effects of language in the switching and planning tasks had ROI (HG, PT, pSTC or DLPFC, FEF, pre-SMA, insula, SPL), hemisphere (left, right), and condition (LEF, HEF) within-subjects factors, and language z-score as a covariate.
In the switching task, the neural switch cost was calculated by subtracting the average neural activity in all stay trials (BOLDstay) from the average activity in the first switch trials (BOLDswitch). This was then used to calculate correlation coefficients with relevant behavioural variables.
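A minimal sketch of this computation is shown below; the per-participant beta estimates and behavioural switch costs are simulated placeholders.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 23                                    # e.g. deaf participants in the switching analysis

    # Simulated placeholders for per-participant ROI betas and behavioural switch costs
    bold_switch = rng.standard_normal(n)      # mean beta, first switch trials
    bold_stay = rng.standard_normal(n)        # mean beta, stay trials
    rt_switch_cost = rng.standard_normal(n)   # behavioural RT switch cost

    neural_switch_cost = bold_switch - bold_stay       # BOLDswitch - BOLDstay
    r, p = pearsonr(neural_switch_cost, rt_switch_cost)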
The Greenhouse-Geisser correction was applied when the assumption of sphericity was violated. Significant interactions and effects of interest were explored by conducting Student’s t-tests or calculating Pearson’s correlation coefficients. Mann-Whitney U-tests were used when the equal variance assumption was violated.
Results
Behavioural results
Group differences
Deaf (N=25) and hearing (N=20) individuals were scanned while performing four executive function tasks: working memory, planning, switching, and inhibition (Figure 1). Behavioural results from all tasks are shown in Figure 4. To explore differences in performance between groups, we conducted 2×2 repeated-measures ANOVAs for each task, with either accuracy or reaction time (RT) as the dependent variable, between-subjects factor group (hearing, deaf), and within-subjects factor condition (HEF, LEF). Results show a significant main effect of condition for both accuracy (working memory: F1,41=91.52, p<0.001; switching: F1,41=30.51, p<0.001; planning: F1,38=46.07, p<0.001; inhibition: F1,35=19.15, p<0.001) and RT (working memory: F1,41=197.55, p<0.001; switching: F1,41=27.53, p<0.001; planning: F1,38=240.29, p<0.001; inhibition: F1,35=102.28, p<0.001) in all tasks (Table 2).
Figure 4. The figure shows average accuracy (%correct) and reaction time (seconds) for each task and condition in the hearing and the deaf groups. It also shows the average switch costs and Simon effects for both accuracy and reaction time in each group. The accuracy switch cost and Simon effect are calculated and plotted using %error instead of %correct so that larger values indicate an increase in cost. Only the first trials of the switch blocks were included in the HEF condition. The bold lines in the box plots indicate the median. The lower and upper hinges correspond to the first and third quartiles. Differences between conditions were statistically significant (p<0.05) for all tasks in both groups (not shown). **p<0.01; *p<0.05.
The group of deaf individuals had significantly slower RT in all tasks (Table 2) (working memory: F1,41=4.97, p=0.03; switching: F1,41=5.14, p=0.03; planning: F1,38=8.57, p=0.006; inhibition: F1,35=9.45, p=0.004). In the inhibition task, there was also a significant condition × group interaction (F1,35=9.54, p=0.004). A post-hoc t-test revealed that the deaf group was significantly slower in the congruent condition (t35=2.72, p=0.01). This condition × group interaction in RT is also reflected in a significant difference between groups in the Simon effect (RTincongruent–RTcongruent) (t35=-2.48, p=0.02; Figure 4D; Table 2-1), which was smaller in the deaf group (mean=19ms; SD=17ms) than in the hearing group (mean=33ms; SD=15ms).
Switching was the only task where there was a significant main effect of group on accuracy (F1,41=5.16, p=0.03) and a condition × group interaction (F1,41=5.75, p=0.02). A post-hoc t-test revealed that the deaf group was significantly less accurate in the switch condition (t41=-3.13, p=0.02). The difference in the accuracy switch cost (%errorsswitch–%errorsstay) confirms this pattern, with the deaf group (mean=10.60; SD=9.68) having a larger accuracy switch cost than the hearing group (mean=4.18; SD=7.53; t41=2.40, p=0.02; Figure 4B; Table 2-1).
Effect of language on behavioural performance in the deaf group
To test whether the variability in behavioural performance in the deaf group can be explained by their unique language experience, we investigated the relationship between performance in the executive function tasks and language proficiency. We recruited deaf participants with different language backgrounds, i.e., with varied proficiency and age of acquisition, to reflect the heterogeneity observed in deaf communities. This variability is not typically found in the hearing population in the absence of underlying neurological conditions or extreme social isolation, thus this analysis was conducted only in the group of deaf individuals. We combined results from English and BSL grammaticality judgement tasks (EGJT and BSLGJT) to create a single, modality-independent measure of language proficiency in the deaf group (see Methods). Accuracy scores in the EGJT (%correct; mean=83.48, SD=11.41, N=25) and BSLGJT (mean=78.45, SD=13.30, N=20) were transformed into z-scores separately for each test (Figure 5A). For each participant, the EGJT and BSLGJT z-scores were then compared, and the higher one was chosen for a combined modality-independent language proficiency score (Figure 5B).
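The sketch below illustrates this z-scoring and selection step in Python; the scores are hypothetical examples, with NaN marking participants who did not take the BSLGJT.

    import numpy as np
    import pandas as pd

    # Hypothetical accuracy scores (%correct); NaN marks participants without a BSLGJT score
    scores = pd.DataFrame({
        "EGJT":   [92.0, 71.0, 85.0, 78.0],
        "BSLGJT": [88.0, np.nan, 95.0, 70.0],
    })

    # z-score each test separately, across the participants who took it
    z = (scores - scores.mean()) / scores.std()

    # the higher of the two z-scores becomes the modality-independent language score
    scores["language_z"] = z.max(axis=1)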
Figure 5. A. Language z-scores in the English grammaticality judgement task (EGJT) and BSL grammaticality judgement task (BSLGJT), with participants sorted on the x-axis by their EGJT rank. Black circles indicate the z-score chosen for the combined modality-independent language score. B. Modality-independent language z-scores. Data from two participants whose performance clearly deviated from that of the group (>2 SD from the mean) were removed from the analysis that included language as a covariate. C. Correlations between accuracy (%correct and switch cost) and language z-scores in the switching task. D. Correlations between RT and language z-scores in the planning task.
For each task, we performed ANOVAs with condition (LEF, HEF) as a within-subjects factor and language z-score as a covariate. The analysis revealed a significant condition x language z-score interaction in the switching task (F1,19=4.82, p=0.04) (Table 2-2A). A post-hoc analysis showed that there was a significant positive correlation between language z-scores and accuracy in the switch condition (r=0.45, p=0.04) but not in the stay condition (p>0.05) (Figure 5C). This correlation suggests that participants with higher language proficiency scores were more accurate in the switching condition, which is also reflected in a significant correlation between language z-scores and the accuracy switch cost (r=0.45, p=0.04) (Table 2-2B, Figure 5C).
In the planning task, we found a significant condition x language z-score interaction (F1,17=7.23, p=0.02) for RT (Table 2-2A). A post-hoc analysis showed a significant negative correlation between RTs in the control condition and language z-scores (r=-0.56, p=0.01) (Figure 5D).
There were no significant main effects or interactions with language proficiency in the working memory and inhibition tasks (Table 2-2A).
Neuroimaging results
All executive function tasks activated typical frontoparietal regions in both groups of participants (Table 3, Figure 6). There were significantly stronger activations in frontoparietal areas in the HEF condition of the working memory, planning, and switching tasks. These included commonly found regions such as dorsolateral prefrontal cortex (DLPFC), frontal eye fields (FEF), pre-supplementary motor area (pre-SMA), and intraparietal sulcus (IPS). In the inhibition task, the HEF incongruent condition produced numerically stronger activation in IPS and left FEF, but the differences between conditions did not reach significance (Figure 6D).
Figure 6. Activations for each EF task and condition, with contrasts calculated across both groups. All contrasts are displayed at p<0.05 (FWE-corrected). Colour bars represent z-scores.
To investigate differences between groups in executive processing and their relationship with behavioural performance and language, we conducted a region of interest (ROI) analysis on temporal auditory and frontoparietal regions. Temporal ROIs included: Heschl’s gyrus (HG), the planum temporale (PT), and the posterior superior temporal cortex (pSTC) (Figure 2A); group averages from these ROIs can be found in Figure 2. Differences and interactions between groups are discussed below separately for each task, starting with switching, where we observed the strongest activation of temporal ROIs in the deaf group (Figure 2B). Language effects are only explored in tasks where we found significant effects of language on performance. Whole-brain effects of condition are also shown to aid the description of significant effects.
Switching
Temporal ROIs
The analysis of temporal ROIs showed increased activations during the switching task in the deaf group (Figure 2B, 7A). A repeated-measures ANOVA with between-subjects factor group (hearing, deaf) and within-subjects factors condition (switch, stay), ROI (HG, PT, pSTC), and hemisphere (left, right), revealed the following significant results:
main effect of group (F1,41=15.48, p<0.001), due to significantly higher activations in temporal regions in the deaf group (HG: t41=2.99, p=0.005; PT: t41=3.95, p<0.001; pSTC: t41=3.65, p<.001) (Figure 2B);
group x condition interaction (F1,41=4.75, p=0.03), due to higher activations in the deaf group during the switching condition (HG: t22=2.57, p=0.02; PT: t22=4.70, p<0.001; pSTC: t22=4.51, p<0.001) and no significant differences between conditions in the hearing group (all p>0.5) (Figure 7A);
group x ROI interaction (F1.93,79=3.42, p=0.04), indicating a significant difference in the activation observed between the deaf and hearing group across ROIs (PTdeaf-hearing=3.28 > pSTCdeaf-hearing=2.78 > HGdeaf-hearing=1.98) (Figure 7A).
Figure 7. A. Neural activity in temporal ROIs. ***p<0.005; ****p<0.001. B. Correlations between RT switch cost and neural switch cost in temporal ROIs. Correlation coefficients are colour-coded (green: negative; purple: positive. See the colour bar). Significant correlation coefficients are shown in bold. *p<0.05; **p<0.01. HG=Heschl’s gyrus, PT=planum temporale, pSTC=posterior superior temporal cortex. C. Correlations between PTA (pure-tone average) and neural switch cost in temporal ROIs in the deaf group, and between PTA and RT switch cost (bottom panel). D. Whole-brain analysis: deaf [switch > stay] and hearing [switch > stay]. Contrasts displayed at p<0.001 for visualisation purposes but all peaks significant at p<0.05 (FWE-corrected). Colour bars represent z-scores.
To investigate whether the interaction between group and condition was reflected in differences in behavioural performance, we conducted repeated-measures ANOVAs with neural switch cost (BOLDswitch–BOLDstay) as the dependent variable, and RT or accuracy switch cost as covariates. ROI (HG, PT, pSTC) and hemisphere (left, right) were defined as within-subject factors, and group (hearing, deaf) as a between-subjects factor.
There were significant interactions between RT switch cost and: 1) group (F1,39=8.00, p=0.007); 2) ROI, hemisphere, and group (F1.99,77.61=4.59, p=0.01). To investigate these, correlation coefficients between the behavioural RT switch cost and the neural switch cost were calculated for each ROI and group (Figure 7B). In the deaf group, these revealed a positive correlation between behavioural switch cost and neural switch cost in left HG (r=0.58, p=0.007), right pSTC (r=0.47, p=0.02), and right PT (r=0.53, p=0.009), with an overall positive correlation trend in all other ROIs (positive values in purple in Figure 7B; see also Figure 7-1A). The opposite overall trend was found in the hearing group, with significant negative correlations in left pSTC (r=-0.53, p=0.02) and right HG (r=-0.56, p=0.01) (negative values in green in Figure 7B and Figure 7-1A).
Accuracy switch cost significantly interacted with group (F1,39=7.81, p=0.008). Post-hoc correlations revealed that this interaction was driven by negative correlations between accuracy switch cost and neural switch cost in the hearing group (left PT: r=-0.45, p=0.05; left pSTC: r=-0.53, p=0.02; Figure 7-1B). No significant correlations were found in the deaf group.
The results observed in HG suggest that plastic changes can also occur in primary auditory areas. However, HG contains at least three distinct cytoarchitectonic areas: Te1.0, Te1.1, and Te1.2. Based on its granularity (Hackett, 2011; Morosan et al., 2001) and its anatomical position (Dick et al., 2012), Te1.0 is the region most likely to contain the primary auditory cortex. In agreement with the results from HG, analysis of this region also showed a significant positive correlation between RT switch cost and neural switch cost in the left Te1.0 in the deaf group (r=0.42, p=0.04), and a negative correlation in the right Te1.0 of the hearing group (r=-0.55, p=0.01).
Effect of Degree of Deafness
To investigate whether the degree of deafness contributed to the extent of neural plasticity observed in temporal regions, we conducted a repeated-measures ANOVA on the neural switch cost in the deaf group using hearing threshold (pure-tone average; PTA) as a covariate. The factors included were ROI (HG, PT, pSTC) and hemisphere (left, right). The analysis revealed a significant main effect of PTA (F1,17=5.79, p=0.003) and a significant ROI x PTA interaction (F2,34=3.85, p=0.03). To explore these effects further, we calculated correlations between neural switch costs in each ROI and hemisphere and PTA. Correlations were significant in the left HG (r=0.64, p=0.003), right HG (r=0.54, p=0.02), and in the right pSTC (r=0.49, p=0.03) (Figure 7C).
Language
The behavioural analysis showed a relationship between language proficiency and accuracy in the switching task in the group of deaf individuals (Figure 5C). To understand whether language proficiency was also related to the level of recruitment of temporal regions during switching, we looked at the effect of language z-score as a covariate in our analysis. No significant effect of language was found.
Whole-brain
The results of the ROI analysis were also apparent at the whole-brain level, where there were different profiles of activity for the contrast [switch > stay] in the deaf and hearing groups (Table 3-1A, Figure 7D). These differences included activations for the deaf group along the right and left pSTC, which were absent in the hearing group. Group comparison revealed a significant main effect of group (p<0.05, FWE-corrected), with higher activations for the deaf group in the right calcarine sulcus, bilateral pSTC, and the bilateral middle precentral gyrus (Table 3-1B).
Frontoparietal ROIs
There was a significant main effect of group (F1,41=4.39, p=0.05) driven by higher activations in the deaf group during the switching task (Figure 8). However, in contrast to the results found in temporal ROIs, there was no significant interaction between condition and group, and no significant correlation between behavioural and neural switch cost. There were no significant effects of language in the frontoparietal ROIs in the switching task.
Figure 8. Neural activity in frontoparietal regions. DLPFC=dorsolateral prefrontal cortex, FEF=frontal eye fields, pre-SMA=pre-supplementary motor area, SPL=superior parietal lobule.
Working memory
Temporal ROIs
In temporal ROIs, a repeated-measures ANOVA with between-subjects factor group (hearing, deaf) and within-subjects factors condition (working memory, control), ROI (HG, PT, pSTC), and hemisphere (left, right) revealed a significant condition × group interaction (F1,41=6.41, p=0.01) in the working memory task. This effect was due to different trends of activity across groups and conditions. Specifically, the deaf group showed increased activity during the working memory condition, whereas the opposite trend was found in the hearing group (Figure 9A). Differences between conditions within each group were not significant (hearing: t18=1.74, p=0.10; deaf: t23=1.81, p=0.08). No significant main effect of group was found.
Figure 9. A. Condition × group interaction. B. Neural activity in temporal regions. Ctr=control, WM=working memory. HG=Heschl’s gyrus, PT=planum temporale, pSTC=posterior superior temporal cortex.
The lack of significant WM effects in the deaf group in temporal regions is potentially at odds with previous findings showing recruitment of pSTC regions for visual working memory in deaf individuals (Buchsbaum et al., 2005; Cardin et al., 2018; Ding et al., 2015). To investigate this discrepancy, we conducted exploratory t-tests separately for each ROI and each group (Figure 9B). These revealed increased activations in the deaf group during the WM condition only in the right PT (t23=3.04, p=0.006; Figure 9B), and not in any of the other temporal ROIs.
Frontoparietal ROIs
The analysis of activity in frontoparietal regions showed a significant main effect of condition (Figure 10), but no significant main effect or interaction with group.
Figure 10. Ctr=control, WM=working memory. DLPFC=dorsolateral prefrontal cortex, FEF=frontal eye fields, pre-SMA=pre-supplementary motor area, SPL=superior parietal lobule.
Planning
Temporal ROIs
Analysis of temporal ROIs showed a significant main effect of group (F1,38=5.85, p=0.02) (Figure 2B, Figure 11A) in the planning task. This was driven by significant deactivations in the hearing group (t18=-4.47, p<0.001) (Figure 2B, 11A), with no significant difference in activity from baseline in the deaf group (t20=-1.31, p=0.21). No significant condition × group interaction was found.
Figure 11. A. Neural activity in temporal regions in both groups. B. Correlations between language and neural response in the deaf group. Ctr=control, ToL=Tower of London. HG=Heschl’s gyrus, PT=planum temporale, pSTC=posterior superior temporal cortex.
Language
To investigate the effect of language on the neural activity in the temporal ROIs during the planning task in the deaf group, we conducted a 3×2×2 repeated-measures ANOVA with ROI (HG, PT, pSTC), condition (control, Tower of London), and hemisphere (left, right) as factors, and language z-score as a covariate. This analysis revealed a significant ROI x language z-score interaction (F2,34=8.01, p=0.001). To explore this interaction, we calculated correlation coefficients between language z-score and the neural activity for each combination of hemisphere and condition for each ROI (Figure 11B). There was a significant correlation between language z-score and neural activity in left PT in both the Tower of London condition (r=0.56, p=0.01) and the control condition (r=0.48, p=0.04), and between language z-score and neural activity in the right pSTC in the Tower of London condition (r=0.46, p=0.05) (Figure 11B). Correlations with neural activity in HG were not significant (Figure 11-1).
Frontoparietal ROIs
The analysis of activity in frontoparietal regions showed a significant main effect of condition (Figure 12), but no significant main effect or interaction with group. There was no significant main effect of language in the analysis of the frontoparietal regions in this task.
Figure 12. Ctr=control, ToL=Tower of London. DLPFC=dorsolateral prefrontal cortex, FEF=frontal eye fields, pre-SMA=pre-supplementary motor area, SPL=superior parietal lobule.
Inhibition
Temporal ROIs
There was a significant interaction between ROI and group (F1.89,66.05=3.92, p=0.03; Figure 13). There were no significant differences between groups in any ROI. Instead, the ROI x group interaction was driven by a main effect of ROI in the deaf group (higher activations for PT and pSTC than HG; Figure 2B), which was not present in the hearing group.
Figure 13. Neural activity in temporal regions. Con=congruent, Inc=incongruent. HG=Heschl’s gyrus, PT=planum temporale, pSTC=posterior superior temporal cortex.
Frontoparietal ROIs
We found a significant interaction between condition, hemisphere, and group in the inhibition task (F1,35=5.91, p=0.02). Post-hoc t-tests showed that this interaction was not due to differences between groups, but rather it was driven by higher activations in the left hemisphere in the deaf group during the congruent condition (t21=2.32, p=0.03) (Figure 14).
Figure 14. Con=congruent, Inc=incongruent. DLPFC=dorsolateral prefrontal cortex, FEF=frontal eye fields, pre-SMA=pre-supplementary motor area, SPL=superior parietal lobule.
Discussion
Here we investigated how early sensory and language experience impact the organisation of executive processing in the brain. We found that as a consequence of deafness, primary and secondary auditory areas are recruited during switching. Behavioural performance in this task correlated with activity in auditory areas and was modulated by language proficiency. Recruitment of auditory areas during switching correlated with the degree of deafness, more significantly in Heschl’s gyrus, which contains the primary auditory cortex. These results suggest that early absence of auditory inputs results in a functional shift in regions typically involved in auditory processing — in the absence of auditory inputs, these regions adopt a role in specific components of executive processing with measurable consequences on the individual’s behaviour. Recruitment of auditory regions was not observed in all EF tasks, indicating the absence of a common role in cognitive control in the deaf population. In the planning task, deaf individuals with the highest language scores also recruited secondary auditory regions. This suggests differences in the use of language to aid EF depending on early language experience and later proficiency, highlighting that superior temporal cortices have shared or overlapping roles in language and executive processing in deaf individuals (Cardin et al., 2020b).
Overall, we show executive processing in temporal regions typically considered to be auditory processing regions, suggesting that the involvement of regions in the adult brain for sensory or cognitive processing can be influenced by perceptual experience.
The auditory cortex of deaf individuals is recruited during task switching
To study the effects of early deafness on cortical reorganisation and executive processing, we mapped neural activity in a range of EF tasks: switching, working memory, planning, and inhibition. This design allowed us to thoroughly examine the role of auditory regions in components of executive function that are shared or unique across tasks. The HEF condition in all tasks recruited frontoparietal areas typically involved in EF and cognitive control. However, only switching resulted in significant activations in temporal auditory regions in the deaf group. This finding demonstrates that the deaf auditory cortex serves only a specific subcomponent of executive functioning during switching. If there were a general role in cognitive control for these brain regions, similar activations would have been seen across all tasks.
Switching was also the only task in which we found differences in accuracy between groups, with the deaf group performing, on average, significantly less accurately. Accuracy in the switching task in the deaf group was linked to language proficiency, highlighting that poorer performance is not a consequence of crossmodal plasticity or deafness per se, but is instead related to early language deprivation and consequent language delay (see below).
During the LEF and HEF conditions of the switching task, the deaf group activated temporal and frontoparietal regions more strongly than the hearing group. However, only in temporal areas did we find an interaction between group and condition and a correlation with behavioural performance. In the deaf group, the neural switch cost (the difference in BOLD response between switch and stay trials) correlated positively with the behavioural RT switch cost in left HG, right pSTC, and right PT. This direct relationship between behavioural outcomes and activity in reorganised cortical areas provides robust evidence of the functional and behavioural importance of the observed crossmodal plasticity. The link between higher neural activity and poorer behavioural performance indicates effortful processing, as has been previously observed in other cognitive tasks with different levels of complexity (Cazalis et al., 2003; Just et al., 1996).
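As an illustration of this brain–behaviour analysis, the sketch below shows one way the neural switch cost and its correlation with the reaction-time switch cost could be computed per ROI. The file names, column names, condition labels, and the choice of Pearson correlation are assumptions made for the example; they are not taken from the study’s actual analysis code.

```python
# Minimal sketch (assumed data layout): compute the neural switch cost
# (BOLD on switch trials minus BOLD on stay trials) per subject and ROI,
# and correlate it with the behavioural RT switch cost (RT_switch - RT_stay).
import pandas as pd
from scipy import stats

neural = pd.read_csv("switching_bold.csv")  # columns: subject, roi, condition, bold
behav = pd.read_csv("switching_rt.csv")     # columns: subject, rt_switch, rt_stay

# Neural switch cost per subject and ROI (condition labels assumed: switch, stay)
wide = neural.pivot_table(index=["subject", "roi"], columns="condition",
                          values="bold").reset_index()
wide["neural_cost"] = wide["switch"] - wide["stay"]

# Behavioural RT switch cost
behav["rt_cost"] = behav["rt_switch"] - behav["rt_stay"]
merged = wide.merge(behav[["subject", "rt_cost"]], on="subject")

# Correlation in each ROI (e.g. left HG, right PT, right pSTC)
for roi, sub in merged.groupby("roi"):
    r, p = stats.pearsonr(sub["neural_cost"], sub["rt_cost"])
    print(f"{roi}: r = {r:.2f}, p = {p:.3f}")
```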
Switching requires cognitive flexibility and shifting between different sets of rules (Gurd et al., 2002; Rushworth et al., 2002). Shifting is considered one of the core components of executive control. It is defined as the ability to flexibly shift “back and forth between multiple tasks, operations, or mental sets” (Miyake et al., 2000). Shifting is also important in the working memory tasks (2-back WM, visuospatial delayed recognition) that resulted in the recruitment of posterior superior temporal regions in deaf individuals in previous studies (Cardin et al., 2018; Ding et al., 2015). In the present study, the working memory task did not significantly recruit pSTC; we only observed moderate recruitment of the right PT, the magnitude of which was significantly smaller than that observed during switching. The WM task we used in this study requires updating of information and incremental storage, but no shifting between targets or internal representations of stimuli. Together, these results suggest that previous WM effects in superior temporal regions are not necessarily linked to storage, updating, or control, but are more likely linked to shifting between tasks or mental states.
A possible physiological mechanism supporting this change of function in the auditory cortex is its anatomical proximity to the parietal lobe, in particular the temporoparietal junction (TPJ), and to other middle and posterior temporal regions (Cardin et al., 2020b; Shiell et al., 2016). Right TPJ is a multisensory associative region involved in reorienting attention to task-relevant information, such as contextual cues or target stimuli (Corbetta & Shulman, 2002; Geng & Mangun, 2011; Geng & Vossel, 2013). The right posterior temporal cortex also seems to have a role in attention in both deaf and hearing individuals (Seymour et al., 2017). Furthermore, portions of the middle temporal gyrus have been shown to be involved in task switching (Lemire-Rodger et al., 2019) and to encode task-set representations (Qiao et al., 2017). The anatomical location and the functional role of TPJ and other middle and posterior temporal regions suggest that, in the absence of auditory inputs throughout development, the computations performed by these temporo-parietal regions could be extended to adjacent auditory cortices (Cardin et al., 2020b; Shiell et al., 2016). However, the functional profile of these temporo-parietal areas, and in particular their link to behaviour, is the opposite of what we observed here: stronger optical imaging activations in the posterior temporal cortex were linked to faster reaction times in an attention task (Seymour et al., 2017), and higher activity in TPJ was associated with fewer errors during switching (Larson & Lee, 2013, 2014). In “auditory” temporal areas of the deaf brain, we observe the opposite pattern, with higher activations in the switching condition linked to slower responses and a larger switch cost. Thus, it is likely that the role of deaf “auditory” regions is different from that of adjacent temporo-parietal cortices.
Another possibility is that the recruitment of “auditory” temporal regions for switching observed in deaf adults reflects vestigial functional organisation present in early stages of development. Research on hearing children has found activations in bilateral occipital and superior temporal cortices during task switching (Engelhardt et al., 2019), with a similar anatomical distribution to the one we find here. Our findings in deaf individuals suggest that executive processing in temporal cortices could be “displaced” by persistent auditory inputs which, as the individual develops, may require more refined processing or demanding computations. Thus, an alternative view is that regions considered to be “sensory” have mixed functions in infants and become more specialised in adults, following different developmental pathways influenced by environmental sensory experience: the temporal regions of hearing individuals become progressively more specialised for sound processing, whereas, in deaf adults, they are more specialised for subcomponents of executive processing.
Several studies of crossmodal plasticity propose a preservation of function in auditory areas, whereby these regions maintain their original computation but adapt to respond to a different sensory input (Benetti et al., 2017, 2021; Cardin et al., 2013; Lomber et al., 2010). Other studies have suggested that sensory-deprived auditory regions are involved in higher-order cognitive functions, suggesting a functional change (Cardin et al., 2020b). Taking into account the different mechanisms that can support all of these findings (Cardin et al., 2020), as well as considering different developmental trajectories, can contribute to more dynamic accounts of plasticity that move beyond the dichotomy of preservation versus change of function. This includes considering our choice of frame of reference, as “change” or “preservation” is usually defined with the developed neurotypical adult brain as the normative comparison point. With the adult brain of hearing individuals as the reference, our findings in the adult deaf brain can be seen as a change or shift towards cognitive processing; but there may be preservation of function if we compare them to the early function of those regions in the developing brain.
Differences in reorganisation in primary and secondary auditory cortices
Our results show activations in all tested superior temporal areas during the switching task. This included Heschl’s gyrus (HG), and specifically Area Te1.0, which likely contains the human primary auditory cortex (PAC) (Dick et al., 2012; Hackett, 2011; Morosan et al., 2001). While crossmodal plasticity has been consistently found in higher-order auditory areas, results from the primary auditory cortex are less consistent (for a review, see Butler & Lomber, 2013; Cardin et al., 2020b; Kral, 2007). Using fMRI, somatosensory stimulation has been shown to strongly recruit primary auditory areas in deaf individuals (Karns et al., 2012), but activations elicited by visual tasks are generally modest or absent, and in many cases differences between deaf and hearing groups are driven by deactivations in the hearing group (Cardin et al., 2016; Karns et al., 2012). Here we found not only differences between groups, but also significantly higher activations in the HEF condition and a correlation with behavioural performance, highlighting the relevance of this plasticity. These results show that crossmodal plasticity and a functional shift towards cognition can indeed occur in primary auditory regions. The only other fMRI study showing a link between activity in primary auditory areas and behaviour is that of Karns et al. (2012), in which the intensity of a double-flash visual illusion, elicited by concurrent somatosensory stimulation, correlated with activity in rostrocaudal HG (Area Te1.2). Together, these findings suggest that passive visual stimulation is not enough to activate HG; additional multisensory stimulation or executive demands, such as those of the switching task, seem to be needed.

There were also notable differences between the patterns of activity observed in HG and in secondary regions such as the planum temporale (PT) and pSTC. Activations during the switching condition were of a smaller magnitude in HG than in the PT and pSTC. In the inhibition task, activations in the deaf group were higher in the PT and pSTC than in HG (significant group x ROI interaction). In the planning task, contrary to what was observed in the PT and pSTC, there were no significant correlations with language in HG, nor a positive trend (in agreement with our previous study (Cardin et al., 2016), where we found no significant activation for sign language processing in HG). These differences between primary and secondary areas may arise in part because HG is the first cortical relay of auditory inputs and has stronger subcortical inputs from the thalamus (Kaas et al., 1999). As such, any remaining auditory inputs are likely to have a more prominent effect here than in secondary auditory areas. This could explain the effect of the degree of deafness, whereby crossmodal reorganisation in HG would be observed only in individuals with the most profound degrees of deafness. The closer vicinity of the PT and pSTC to middle temporal and parietal regions could also drive more reorganisation in these areas than in HG.
Language proficiency modulates cognitive processing and neural reorganisation
One of our goals was to investigate how language experience influences cognitive processing and neural reorganisation. In our study, deaf individuals were significantly less accurate in the switching task, with switch cost correlating with language scores. These results indicate that differences in task-switching performance are driven by language experience, and not by an absence of auditory inputs. As a group, deaf participants also had significantly longer reaction times in all tasks. This is the opposite of what is often found in studies of visual reactivity in deaf individuals (Nava et al., 2008; Pavani & Bottari, 2012), highlighting critical differences in performance between purely perceptual tasks and those that weigh more heavily on executive demands. Differences in performance in EF tasks have been previously described in studies of deaf children, and they have been found to be associated with language delay (Botting et al., 2017; Figueras et al., 2008; Marshall et al., 2015). Similar results have been found when parental reports were used as an assessment of EF, with early language access having a stronger impact on EF than early access to sound (Hall et al., 2017, 2018). Differences in EF are not typically found in studies of deaf native signers (e.g. Cardin et al., 2018; Marshall et al., 2015), who achieve language development milestones at the same rate as hearing individuals learning a spoken language. Studies exploring the link between EF and language experience in deaf individuals have been conducted mostly in children. The present research shows that this relationship persists later in life: the language proficiency of an adult deaf individual still influences their performance on EF tasks. This emphasises the importance of language development, which can have long-lasting effects on executive processing throughout the lifespan.
Behaviourally, participants with better language scores were also faster in the control condition of the planning task, which involved simple mathematical problem solving (van den Heuvel et al., 2003). Solving arithmetic operations activates the language network (Andin et al., 2015) and mathematical skills have been associated with language proficiency in the general population (Henry et al., 2014; Mestre & Cocking, 1988) and deaf students (Kelly & Milham, 2016). Our study confirms this association and shows that the relationships between language and mathematical skills, and language and planning, also manifest in the brain. In the deaf group, language proficiency was associated with both neural activity and behavioural performance during the execution of both conditions of the planning task. The fact that we see a positive association between the neural activity in the PT and pSTC and the language scores of deaf participants in both the HEF and LEF conditions, with no interactions between them, indicates that what we observe is an effect of language processing, rather than executive processing. Unlike during switching, deaf participants do not recruit the PT and pSTC during the HEF condition of the planning task. The correlation with language reflects a different type of function: the PT and pSTC are involved in linguistic processing in deaf individuals (Cardin et al., 2020a; Cardin et al., 2013; Emmorey et al., 2003, 2011; MacSweeney et al., 2002, 2004). Given that deaf participants with higher scores recruited temporal regions more, we hypothesise that better language skills facilitate the use of linguistic strategies in solving the tasks. Indeed, planning has been linked to private speech, which is essential for developing planning skills in early childhood (Fernyhough & Fradley, 2005; Lidstone et al., 2010; Vygotsky, 1968). Language provides a foundation for planning, goal-directed behaviour and solving simple mathematical problems, being “the medium through which higher-order (if-if-then) rules are formulated” (Pellicano, 2012; Zelazo et al., 2003). Developmental gains in language skills, specifically the ability to formulate hierarchical rules, are directly implicated in the development of EF (Best & Miller, 2010). In deaf individuals, this gain is supported by a larger degree of involvement of the temporal cortices during the planning task.
In summary, we propose that timely development of a first language boosts the overall efficiency of executive processing, regardless of whether the EF task itself allows implementation of purely linguistic mechanisms. Hierarchical rules of the “if-then” type can also be implemented, in an automatic way, during switching. Language can provide the necessary “framework” for these rules to develop and be used in a dynamic task in the most efficient way. Although participants are not required to use linguistic strategies during switching, we speculate that those who have benefited from the efficiency associated with developing such frameworks can invest fewer cognitive and neural resources in solving this task.
Conclusion
Here we show that executive processing in the adult brain is influenced by early sensory and language experience. While frontoparietal networks are involved in EF in both deaf and hearing individuals, deaf individuals also recruit superior temporal regions that are usually considered “auditory”. This recruitment was specific to switching and correlated with the switch cost, suggesting a role of temporal regions in a subcomponent of executive control. Plasticity in the primary auditory cortex was linked to the degree of deafness, but the degree of deafness did not predict performance. These results suggest that the absence of auditory inputs “frees” superior temporal regions to take on functions other than sensory processing, either by preserving a function these areas performed early in childhood or by taking on new functions driven by top-down projections from frontoparietal areas or adjacent temporal and parietal regions. Only developmental neuroscience studies in deaf and hearing children will allow us to distinguish between these possibilities. We show that developmental language experience can lead to varying outcomes for cognitive functions in adulthood. Language scores, independently of modality, predicted accuracy in the switching task and reaction times in the control condition of the planning task (simple mathematical operations). Our study offers insight into the role of language in executive processing by demonstrating how language can provide mechanisms that aid and optimise EF processing. Overall, results from this study suggest different responses in deaf “auditory” temporal areas for executive and language processing. We have previously observed these shared functions, describing an overlap between working memory and language processing in superior temporal areas of deaf adults (Cardin et al., 2020b). By understanding the developmental trajectories of these changes, we can move towards a unified theory of crossmodal plasticity.
Extended Data – Figures
A. Correlations between neural switch cost and RT switch cost. B. Correlations between neural switch cost and %error for accuracy. HG=Heschl’s gyrus, PT=planum temporale, pSTC=posterior superior temporal cortex.
HG=Heschl’s gyrus, PT=planum temporale, pSTC=posterior superior temporal cortex.
Extended Data – Tables
Language, switch costs, and Simon effects in the deaf group
Acknowledgments
The authors would especially like to thank all the deaf and hearing participants who took part in this study. This work was funded by a grant from the Biotechnology and Biological Sciences Research Council (BBSRC; BB/P019994). VV is funded by a scholarship from the University of East Anglia.
Footnotes
Errors in task description of Figure 1 have been corrected. Results have been restructured.