Abstract
The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While different input modalities can lead to retrieval of the same word, the exact neural dynamics supporting this convergence, particularly as it relates to everyday auditory discourse, remain poorly understood. Here, we leveraged neurosurgical electrocorticographic (ECoG) recordings from 48 patients and dissociated two key language networks integral to word retrieval that overlap substantially in time and space. Using unsupervised temporal clustering techniques, we found a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was agnostic to input modality. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. Our findings characterize how humans integrate ongoing auditory semantic information over time, a critical linguistic function supporting behaviors from passive comprehension to daily discourse.
Introduction
Word retrieval is a critical part of human communication. It consists of the ability to access and retrieve the form of a word before articulation. Naming paradigms, which are tasks designed to isolate word retrieval, have become integral to clinical assessment as well as basic research. Typically, a stimulus is presented visually or auditorily and the associated perceptual representations are then transformed into a shared prearticulatory code [1, 2], which is in turn converted into an articulatory plan for overt word production [3, 4]. Clinical settings often employ a battery of naming tasks, commonly including visual naming, verb generation, reading, and repetition [3, 5–8]. These are widely used in intraoperative language mapping [9–14], neuroimaging studies [15–19], neuropsychological assessments [7, 20, 21], and postsurgical outcome evaluation [6, 8, 22–25].
Two decades ago, Hamberger et al. observed that many patients with intact abilities to name visually presented stimuli experienced difficulties in word retrieval during everyday discourse. To address this, they introduced a task called auditory naming, in which individuals are prompted to name items based on their descriptions [17, 26–28]. Intraoperative electrical stimulation mapping has provided evidence for a segregation of function wherein regions in lateral temporal cortex were specific to either auditory or visual naming [9–11]. This segregation has been supported by neuroimaging studies showing recruitment of the anterior temporal lobe and the Inferior Frontal Gyrus (IFG) [16, 29]. Recent electrocorticography (ECoG) studies have gone beyond neuroimaging by providing the exact timing of recruitment in frontal cortex during naming tasks. Importantly, IFG has shown enhanced activity across multiple time periods before articulation: immediately after stimulus onset [30, 31], at stimulus offset [30, 32], prior to response onset [32, 33], and throughout stimulus presentation [34]. Notably, in tasks that do not engage semantic processing, IFG has shown similar activity preceding speech by 250 ms, implicating it in articulatory planning [4, 35]. Critically, semantic processing and articulatory planning have shown overlapping activity spatially and temporally; however, previous studies focused on a single task type and therefore could not dissociate these cognitive functions.
To address these issues, we recorded ECoG signals, which offer excellent combined temporal and spatial resolution, from a large cohort (N=48) of neurosurgical patients. We isolated motor planning and task-specific retrieval processes before articulation by leveraging four tasks that elicit the same spoken word via distinct routes. Using data-driven machine learning techniques, we found two distinct spatiotemporal networks that separate articulatory planning from semantic processing across a ventral-dorsal gradient. Spatially, semantic processing spanned dorsal frontal cortices while articulatory planning was restricted ventrally. We validated that the dorsal network encodes increased semantic load during sentence presentation, and we showed that it is left-lateralized. These findings constitute novel evidence for a hub in dorsal IFG and MFG responsible for semantic integration, a critical linguistic function for behaviors ranging from reading to discourse.
Results
In order to investigate the spatiotemporal dynamics leading up to lexical retrieval, we recorded ECoG signals from 29 participants (25 left grid, 4 bilateral) sampling the left hemisphere across four tasks: visual naming (VN), visual word reading (VWR), auditory naming (AN), and auditory word repetition (AWR). The tasks were designed to produce the same set of words while varying the routes of retrieval, controlling for task modality and semantic access (Fig. 1A). Neural responses were quantified using high gamma broadband activity (70-150 Hz, see Methods: Data Acquisition and Preprocessing), a widely applied marker of local cortical activity that correlates with underlying neuronal spike rates and the BOLD response [36]. We first report single-trial activity locked to articulation in three regions of interest: IFG, Middle Frontal Gyrus (MFG), and precentral gyrus. Temporally, we found enhanced responses before articulation across all three regions (Fig. 1B), which exhibited specificity for the semantic tasks compared with their controls (one-way t-test: VN vs. VWR: IFG, p < 1e-5, t = 7.96; MFG, p < 1e-5, t = 20.86; precentral, p < 1e-5, t = 9.19; AN vs. AWR: IFG, p < 1e-5, t = 16.50; MFG, p < 1e-5, t = 25.39; precentral, p < 1e-5, t = 7.36, see Methods: Statistical Tests). We confirmed this pattern by focusing on a pre-articulatory window (-500ms - 0ms) within each electrode, showing enhanced responses for the semantic tasks with maximal activation in auditory naming (Fig. 1C). These results provide evidence for the highest neural recruitment in prefrontal cortices during auditory naming.
To statistically evaluate the semantic- and modality-specific factors modulating neural activity before articulation, we employed a neural encoding model. This multilinear model predicts neural activity at each electrode based on two sets of features: task modality (auditory vs. visual) and level of semantic demand (low vs. high, see Methods: Multilinear Factor Model). Our experiment was designed such that each task produces the same words and contains a unique combination of both feature dimensions (Fig. 2A). We first quantified the significance of each feature across electrodes in the pre-articulatory window (-500ms - 0ms). We found that the auditory modality mostly predicted activity in STG (Fig. 2B, top), the visual modality predicted activity in occipital cortex (see Supplementary Fig. S1), and auditory semantics strongly predicted activity across both IFG and MFG (Fig. 2B, middle, showing significant electrodes, FDR corrected q = 1e-3). Strikingly, visual semantics dissociated from its auditory counterpart, with a more posterior distribution overlapping with MFG (Fig. 2B, middle and bottom). To ensure the effect reflects sustained semantic access before articulation, we employed a temporal moving window approach (-750ms - 500ms). Sustained and significant (FDR corrected q = 1e-3) modality-specific semantic effects were most prominent from -500 ms to -250 ms across frontal cortices and dissipated by the time of articulation (Fig. 2C). Taken together, these results provide evidence for a posterior-to-anterior gradient of modality-specific semantic encoding, with visual semantics represented more posteriorly and auditory semantics more anteriorly, supporting word retrieval.
Given that these results were not limited to specific anatomical regions, we sought to characterize the temporal profiles shared by electrodes without imposing functional or spatial assumptions on the neural data. We used an unsupervised Non-Negative Matrix Factorization analysis (NMF, see Methods: Non-negative Matrix Factorization) on the neural responses across all tasks, cortical sites, and participants. This data-driven approach identified five networks with unique temporal profiles in line with sensory, pre-articulatory, and motor functions. Two sensory networks cleanly captured stimulus-induced sensory responses in auditory and visual cortices (see Fig. 3A, Supplementary Fig. S2C). Three major networks were found across frontal cortices (Fig. 3A,B). The first network peaked before articulation, exhibited enhanced activity for auditory naming relative to the other tasks (p < 1e-5, Kruskal-Wallis test, see Methods: Statistical Tests), and was spatially distributed across IFG and MFG (Fig. 3B, top), similar to the distribution of auditory semantic information we found in Fig. 2B. The second network also peaked before articulation (mean peak timing = -158 ms, SEM = 16 ms); however, its activity did not differ across tasks (p = 0.173, Kruskal-Wallis test) and was spatially distributed across IFG and speech motor cortex (SMC, i.e., peri-rolandic cortex). Lastly, the third network peaked after articulation (mean = 93 ms, SEM = 11 ms), was spatially distributed across SMC, and did not differ across tasks (p = 0.70, Kruskal-Wallis test). Our results demonstrate the existence of two spatially and functionally distinct cortical networks preceding speech within frontal cortex: one task-agnostic, suggesting a role in speech planning, and the other naming-specific, suggesting a role in mapping from auditory to semantic representations.
We next asked to what degree the observed functional networks were part of the language network. To address this, we expanded our analysis to include a cohort of patients with right hemisphere coverage (N=23, see Methods: Participants Information). We repeated the same clustering approach using active electrodes from both hemispheres and statistically assessed the degree of hemispheric lateralization (see Methods: Permutation). This analysis replicated our left-hemisphere clustering results, identifying five networks with temporal profiles similar to those in the left-hemisphere-only analysis (see Supplementary Fig. S3B). The speech motor network exhibited a bilateral distribution (Fig. 4A, bottom), and the ratio of recruited active electrodes was not significantly different between the two hemispheres (p = 0.1443, permutation test, see Methods: Permutation, Fig. 4B, bottom). In contrast, we found significant asymmetry in the active electrode ratio in both the naming-specific (p < 1e-5, permutation test) and pre-articulatory networks (p < 1e-5, permutation test), which were overwhelmingly left-lateralized. These results suggest that the left-lateralized naming-specific activity is likely integral to language processing rather than serving as a domain-general component (e.g. working memory, which is bilateral [37]).
In order to understand the underlying representation of these cortical networks, we employed an encoding approach (see Methods: Encoding Model). We constructed a model that predicts neural activity during auditory naming based on three representations (Fig. 5A, see Methods: Encoding Model): acoustic information (i.e. speech envelope), task engagement (i.e. task structure), and surprisal (i.e. incremental word surprisal derived from GPT-2). By calculating the unique variance explained by each model (see Methods: Variance Partitioning), we captured significant representations, as depicted in three representative electrodes encoding surprisal (Fig. 5B, top), acoustic (Fig. 5B, middle), and task (Fig. 5B, bottom) information (permutation test, p < 0.05). Across cortex, significant word surprisal encoding (Fig. 5C, top) was predominantly localized in IFG and MFG (left-lateralized) as well as STG (bilaterally). Significant acoustic representation was mainly localized in STG and dorsal precentral gyrus (bilaterally; Fig. 5C, middle). Representation of task engagement was mainly found in precentral gyrus (bilaterally; Fig. 5C, bottom). The left-lateralized spatial distribution of surprisal encoding was consistent with our previous clustering findings (Fig. 4A), and we quantified surprisal encoding across these networks (Fig. 5D). Word surprisal was significantly greater for the naming-specific network compared with the other frontal networks (Wilcoxon rank sum test, p < 1e-5, see Methods: Statistical Tests; speech motor: z = 8.91; pre-articulatory: z = 7.21; speech motor vs. pre-articulatory was not significant: p = 0.31; see Supplementary Fig. S4 for encoding results across all networks). These findings suggest that the naming-specific network mainly encodes word surprisal. Taken together, our results demonstrate that auditory semantic information, particularly word surprisal, is encoded by a left-lateralized frontal network spanning IFG and MFG, distinct from the pre-articulatory planning activity associated with speech.
Discussion
Word retrieval is a fundamental human ability, critical for everyday communication and integral to clinical assessment. Here, we leveraged direct neurosurgical recordings to systematically probe lexical retrieval across clinical naming tasks. We used a controlled word production battery and showed specific recruitment of middle and inferior frontal sites for auditory naming. Across cortex, we showed a spatial dissociation between auditory and visual naming. In contrast to previous approaches focusing on region-of-interest analyses, we leveraged unsupervised clustering and found an emergent organization within frontal cortex supporting auditory naming dorsally and task-agnostic articulatory planning ventrally. Further, we showed that these networks are lateralized to the left hemisphere, implicating them in core language functions. Lastly, using supervised encoding models, we established that dorsal frontal activity best represented semantic integration of words. Taken together, these findings highlight an important role of dorsal prefrontal cortex in semantic integration and auditory-based lexical retrieval.
Multiple studies have demonstrated that patients with minimal impairment in visual naming experience difficulties in auditory naming [26]. Since the first report by Hamberger & Seidel [26], higher sensitivity for auditory naming has been shown across varying populations, including older adults with dementia [38–40] and without [41], as well as pediatric populations [28]. Further, evidence from neurosurgical postoperative assessment has shown language decline following resection of sites essential for auditory naming as identified by cortical stimulation [11, 22, 32, 34, 42]. While such studies have shown both temporal and frontal naming sites [10–12, 43], reports supporting a dissociation between visual and auditory naming have been limited to temporal cortex alone [5, 40, 43]. Our findings (Fig. 2) provide direct neurophysiological support for such a dissociation in frontal cortices. While neuroimaging studies have provided evidence for a dissociation of naming modalities within temporal cortices, this has not been reported in frontal cortex, despite significant auditory naming activity there [16, 29, 44]. Intracranial studies have reported frontal activity for both auditory and visual naming [30, 33, 34], with consistently higher activity in auditory naming [30]. However, they did not show a clear dissociation, likely because their task designs did not include non-semantic controls and were limited to overall task effects [30]. Our design ensured that the same words were produced in each of the four tasks, allowing for controlled comparison across modality-specific semantic processing. Moreover, the encoding model (Fig. 2) provided a semantic comparison between the two naming tasks while controlling for modality-specific effects, which can be quite large (see Supplementary Fig. S1B). Further, many previous studies employed a region-of-interest approach, which can obscure spatially overlapping activation patterns [30, 32, 33]. Our unsupervised approach found auditory naming-specific activity distributed across IFG and MFG. While our results are in line with previous ECoG findings, the controlled task design and encoding model enabled us to reveal a finer-grained distinction of modality-specific recruitment.
Our study identified a network encompassing IFG and MFG that encodes semantic integration during sentence perception. Previous studies have reported bilateral prefrontal activity during perception and response periods in both linguistic and nonlinguistic tasks [45, 46]. A possible candidate for explaining this activity, working memory, has been ruled out by Kambara et al., who showed that auditory working memory mainly resulted in precentral gyrus activity [33]. Further, working memory in general [37], and verbal auditory working memory specifically have been shown to be bilateral [47]. In contrast to working memory findings, our results provide strong evidence for a left-lateralized activity cluster (Fig. 4), implicating it in higher-order language functions. Further, we probed the functional representation of this activity and found evidence supporting its role in integrating word surprisal during sentence processing (Fig. 5). This finding is in line with word position sensitivity observed in serial visual word integration [48, 49]. However, our findings are specific to the auditory modality and are more spatially segregated. Taken together, our data suggest that auditory semantic integration occurs across a distributed network spanning IFG and MFG, and is likely recruited during daily conversation [26].
In contrast to this dorsal network, our ventral network is task-agnostic and peaks before articulation. This is in line with findings of IFG's role in speech arrest [50–52] as well as pre-articulatory planning [4]. Recent studies have highlighted the important role of precentral gyrus in speech planning [53, 54], consistent with our results showing peak activation of precentral gyrus before articulation. Finally, our spatial dissociation between pre-articulatory and semantic networks (Fig. 3) suggests a frontal semantic node might be missing in current language models that argue for a multi-stream process [55, 56].
The capacity to access words is critical for daily discourse communication and integral to clinical assessment. Our findings demonstrate the recruitment of a dorsal prefrontal network specific for this function. Our unsupervised approach underscores that left-lateralized language functions are represented across cortical regions and are not limited to traditional boundaries. Furthermore, our encoding analyses suggest that semantic auditory integration is primarily supported by activity in frontal sites. Collectively, these results highlight a new perspective on the function of the dorsal prefrontal cortex in integrating information for communication.
Methods
Participants Information
All experimental procedures were approved by the New York University School of Medicine Institutional Review Board. We analyzed neurosurgical recordings from 48 patients (20 females, mean age: 28.0 ± 12.59 years; 18 right grid, 25 left grid, 4 bilateral hemisphere coverage, and 1 stereo coverage; see Supplementary Table S1: Participant Demographics) undergoing neurosurgical evaluation for refractory epilepsy. Patients implanted with subdural and depth electrodes provided informed consent to participate in the research protocol. Electrode implantation and location were guided solely by clinical requirements. Five patients separately consented to higher-density clinical grid implantation, which provided denser sampling of the cortex. Surface reconstructions, electrode localization, and Montreal Neurological Institute coordinates were extracted by aligning a postoperative brain Magnetic Resonance Imaging (MRI) to the preoperative brain MRI using previously published methods [57].
Experiment Setup
Participants completed four tasks designed to elicit the same target words in response to auditory or visual stimuli (Fig. 1A): visual naming (VN, overtly name a word based on a drawing), visual word reading (VWR, overtly read a word presented on the screen), auditory naming (AN, overtly name a word based on an auditorily presented description), and auditory word repetition (AWR, overtly repeat an auditorily presented word) [58]. Participants produced their answers spontaneously, without cueing or a delay period. The stimuli were randomly interspersed within each block. The visual naming, visual word reading, and auditory word repetition blocks were repeated twice.
Visual stimuli for visual naming and visual word reading were presented on a laptop (15.6" Retina screen) placed in front of participants at a comfortable distance (0.5 m – 1.0 m). Auditory stimuli were presented via a speaker placed in front of the presentation laptop. Onsets and offsets of stimuli were detected via analog channels for time-synchronization purposes: a photodiode for visual stimuli and a trigger channel for auditory stimuli. Participants responded by speaking into a microphone, with verbal response times extracted from an analog microphone channel.
Data Acquisition and Preprocessing
Neural signals were recorded at 2048 Hz with the Nicolet system and decimated to 512 Hz during export. Electrodes were inspected by epileptologists, and electrodes with epileptiform activity or line artifacts were removed from further analysis. The data were then referenced to a common average by averaging the clean signal across all electrodes and subtracting this common average signal from each electrode. Continuous data were divided into epochs locked either to stimulus onset or to speech onset (articulation). Articulation onsets were corrected via manual inspection of the audio recordings, which were time-synced with the neural recordings. Trials in which participants did not respond, or in which the reaction time was >3 standard deviations above the mean reaction time over all trials within each task, were removed from analysis as bad trials.
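For concreteness, a minimal sketch of the re-referencing and trial-rejection steps is shown below. It is written in Python for illustration (the original pipeline was implemented in Matlab), assumes the clean channels are stored as a NumPy array, assumes no-response trials are coded as NaN reaction times, and uses hypothetical function names.

```python
import numpy as np

def common_average_reference(data):
    """Re-reference by subtracting, at each time point, the mean across clean electrodes.
    data: (n_electrodes, n_samples) array of artifact-free channels."""
    return data - data.mean(axis=0, keepdims=True)

def trials_to_keep(reaction_times, n_sd=3.0):
    """Boolean mask of trials to keep: responded trials (RT not NaN) whose RT is
    no more than n_sd standard deviations above the task mean."""
    rt = np.asarray(reaction_times, dtype=float)
    responded = ~np.isnan(rt)                    # no-response trials coded as NaN (assumption)
    mu, sd = np.nanmean(rt), np.nanstd(rt)
    return responded & (rt <= mu + n_sd * sd)
```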
Our analysis of the electrophysiology signals focused on changes in the analytic amplitude of the high gamma band (70–150 Hz). To quantify changes in this range, data were bandpassed into 8 logarithmically spaced subbands between 70 and 150 Hz. The analytic amplitude of each subband was calculated as the absolute value of the Hilbert transform of the filtered signal, and amplitudes were averaged across subbands. This multi-band extraction method was used because it avoids the dominance of lower frequencies [59, 60]. The data were then normalized into percent change from baseline using a [-250 ms, -50 ms] prestimulus interval.
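A minimal sketch of this multi-band high gamma extraction for a single channel, again in Python for illustration. The per-band normalization before averaging is an assumption about how dominance of lower sub-bands is avoided, and the filter order and function names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def high_gamma(x, fs, f_lo=70.0, f_hi=150.0, n_bands=8):
    """Mean analytic amplitude across logarithmically spaced sub-bands for one channel x."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))   # analytic amplitude of this sub-band
        envs.append(env / env.mean())                # per-band normalization (assumption)
    return np.mean(envs, axis=0)

def percent_change(epoch, baseline):
    """Express an epoch as percent change from the mean of a prestimulus baseline."""
    b = baseline.mean()
    return 100.0 * (epoch - b) / b
```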
Analysis Pipeline
Task-active Electrode Selection
For each electrode and each task, we computed the analytic amplitude in three cognitive stages: perception (0ms - 500ms locked to stimulus), pre-articulatory (-500ms - 0ms locked to articulation), and articulation (0ms - 500ms locked to articulation). An electrode was considered active if the maximum of the trial-averaged response exceeded 50% for any task or stage, and if trial activity was significantly above zero for more than 100 ms in any task (t-tests across trials against zero). We took the union of active electrodes across tasks and stages for further analysis.
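A sketch of this activity criterion for a single electrode, task, and window. The significance level and the requirement that the >100 ms of significant samples be contiguous are assumptions not specified above, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import ttest_1samp

def is_active(epochs, fs, min_peak=50.0, min_dur_ms=100, alpha=0.05):
    """epochs: (n_trials, n_samples) percent-change high gamma for one electrode,
    one task, and one analysis window."""
    mean_resp = epochs.mean(axis=0)
    peak_ok = mean_resp.max() > min_peak               # trial-averaged response exceeds 50%
    t, p = ttest_1samp(epochs, 0.0, axis=0)            # per-sample t-test across trials vs. zero
    sig = (p < alpha) & (t > 0)
    run = longest = 0
    for s in sig:                                      # longest contiguous significant run
        run = run + 1 if s else 0
        longest = max(longest, run)
    return peak_ok and longest > (min_dur_ms / 1000.0) * fs
```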
Multilinear Factor Model
We fit a multilinear model to test the factors predicting neural activity of individual electrodes. This multilinear encoding model predicts, for each electrode, neural activity based on the main and interaction effects of the task features (fitlm function, Matlab). For each electrode n, the activity over the current time window t is modeled as y = β0 + Σi βi xi + ε, where y is a vector of length equal to the total number of trials containing the mean high gamma activity over the window t, the xi are the features given by the properties of the tasks (and their interactions), and the βs are the estimated feature weights. xmod is the binary modality feature, where auditory is denoted by 1 and visual by 0 (VN, VWR = 0; AN, AWR = 1). xaudsem is the binary semantic feature encoding auditory semantics (AN = 1; VN, VWR, AWR = 0; see Fig. 2A). xvissem is the binary semantic feature encoding visual semantics (VN = 1; AN, VWR, AWR = 0). For the analysis in Fig. 2B, we chose the time window -750 ms to -250 ms locked to articulation. For the analysis in Fig. 2C, we employed a moving window of 250 ms with a hop of 250 ms. The model returns t- and p-values for each β. We FDR-corrected p-values using the Benjamini-Hochberg method (mafdr function, Matlab). Electrodes were considered significant using FDR with q = 0.001.
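An illustrative, main-effects-only version of this model in Python, with statsmodels standing in for Matlab's fitlm (interaction terms omitted for brevity). The feature coding follows the description above; function names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def factor_design(task_labels):
    """Binary features per trial from task codes 'VN', 'VWR', 'AN', 'AWR'."""
    task_labels = np.asarray(task_labels)
    x_mod    = np.isin(task_labels, ['AN', 'AWR']).astype(float)  # auditory = 1, visual = 0
    x_audsem = (task_labels == 'AN').astype(float)                # auditory semantics
    x_vissem = (task_labels == 'VN').astype(float)                # visual semantics
    return np.column_stack([x_mod, x_audsem, x_vissem])

def electrode_pvalues(y, X):
    """y: mean high gamma per trial in the analysis window; returns one p-value per feature."""
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    return fit.pvalues[1:]                                        # drop the intercept

def fdr_mask(p_per_electrode, q=1e-3):
    """Benjamini-Hochberg FDR across electrodes for one feature."""
    reject, _, _, _ = multipletests(p_per_electrode, alpha=q, method='fdr_bh')
    return reject
```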
Non-negative Matrix Factorization
We used non-negative matrix factorization (NMF, nnmf function, Matlab) as a soft clustering technique to cluster the neural data. The data matrix A is arranged as time by electrodes, where the time dimension is composed of concatenated neural activity from the perception (0ms - 500ms), pre-articulatory (-500ms - 0ms), and articulation (0ms - 500ms) periods across trials and tasks. Data were downsampled from 512 Hz to 125 Hz (after high gamma extraction) for computational efficiency before concatenation. Since the analytic amplitude of high gamma computed with multiband extraction is a low-frequency signal, downsampling does not affect its spectral content.
NMF algorithm
Given a nonnegative matrix A with shape u × v, we found nonnegative factor matrices W (u × k) and H (k × v) such that A ≈ WH. NMF decomposes the data matrix into W and H by minimizing the reconstruction error. We determined the optimal number of clusters k by identifying the elbow point, i.e. the point at which the rate of increase in explained variance (R2) levels off (see Supplementary Fig. S2A, Supplementary Fig. S3A). Explained variance is defined as R2 = 1 − SSEresidual/SSEtotal, where SSEresidual = ||A − WH||²F, with ||·||F denoting the matrix Frobenius norm, and SSEtotal = ||A − mean(A)||²F. Each electrode was then assigned to the cluster of its maximum contribution for visualization and further analysis.
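A sketch of the cluster-number selection and electrode assignment using scikit-learn's NMF in place of Matlab's nnmf. The use of the grand mean of A in SSEtotal is an assumption, and the initialization and iteration settings are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

def fit_nmf(A, k):
    """Fit NMF with k components; A is nonnegative, time x electrodes."""
    model = NMF(n_components=k, init='nndsvda', max_iter=500, random_state=0)
    W = model.fit_transform(A)                           # time x k temporal profiles
    H = model.components_                                # k x electrodes weights
    sse_res = np.linalg.norm(A - W @ H, 'fro') ** 2
    sse_tot = np.linalg.norm(A - A.mean(), 'fro') ** 2   # grand-mean baseline (assumption)
    return W, H, 1.0 - sse_res / sse_tot

# scan k, pick the elbow of the R^2 curve, then hard-assign each electrode
# r2_curve = [fit_nmf(A, k)[2] for k in range(2, 11)]
# W, H, _ = fit_nmf(A, k_elbow)
# electrode_cluster = H.argmax(axis=0)
```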
Encoding Model and Variance Partitioning
We employed the multivariate temporal response function (mTRF) framework to relate continuous neural signals to functional stimulus features [61, 62]. The continuous neural recordings of the auditory naming task were downsampled (after high gamma extraction) to 125 Hz for computational efficiency. For each electrode, the neural activity is represented as a vector of length T, the entire duration from the start of the first trial to the end of the last trial. Three models were constructed to represent different aspects of information (Fig. 5A). Acoustic model: the acoustic model was constructed by averaging the auditory stimulus spectrogram across frequencies (i.e. the speech envelope). Task engagement model: the task engagement model is a binary regressor constructed from stimulus onsets and response offsets; for each trial it is one from stimulus onset to response offset and zero from response offset to the next stimulus onset. Semantic integration model: we used word surprisal as a measure of semantic load to predict neural processing of higher-level linguistic information in speech [63–66]. We constructed the semantic integration model by calculating the cumulative surprisal of the sentence as each word is added, using the GPT-2 model [67]. For a sentence with n words, we obtained n surprisal values by evaluating the first word, the first two words, and so on up to the full n-word sentence, and took the stepwise increments. Each model is a vector of length T, where T is the duration of the auditory naming task, sampled at 125 Hz.
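A sketch of the prefix-by-prefix surprisal computation using the Hugging Face GPT-2 implementation, which is an assumption about tooling (the text above specifies only GPT-2). Note that the first word carries no left context in this sketch; prepending an end-of-text token would be one way to assign it a probability.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def prefix_surprisals(words):
    """Total surprisal (in bits) of each growing prefix: words[:1], words[:2], ..."""
    totals = []
    for i in range(1, len(words) + 1):
        ids = tok(" ".join(words[:i]), return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits
        # log-probability of each token given the tokens before it
        logp = torch.log_softmax(logits[0, :-1], dim=-1)
        token_logp = logp.gather(1, ids[0, 1:, None]).squeeze(1)
        totals.append(float(-token_logp.sum()) / math.log(2))
    return totals

# usage on a hypothetical sentence; increments give the per-word surprisal added:
# s = prefix_surprisals("the cat sat on the mat".split())
# increments = [b - a for a, b in zip([0.0] + s[:-1], s)]
```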
The inputs of the three models were normalized to the same range [0, 1] to avoid weighting bias. The mTRF toolbox [61] was employed to estimate R2 for each model, and we used different combinations of the three models to calculate individual contributions. We assumed the output of the system is related to the input via linear convolution: for every electrode, the instantaneous neural response y(t, n), sampled at times t = 1, ..., T, is modeled as y(t, n) = Στ w(τ, n) s(t − τ) + ε(t), a convolution of the stimulus property s(t) with an unknown electrode-specific temporal receptive field (TRF) w(τ, n), where ε is the residual response at each electrode not explained by the model. The range of time lags τ was set from 0 ms to 400 ms. To prevent overfitting, we used ridge regression with L2 regularization and 4-fold cross-validation, optimizing hyperparameters to maximize the mutual information between actual and predicted responses. This encoding model was fit for all active electrodes.
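A simplified Python analogue of the lagged ridge regression underlying the mTRF fit. It scores models by held-out R2 rather than mutual information, which is a simplification of the procedure described above; the regularization grid and function names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def lagged_design(stim, fs, tmin=0.0, tmax=0.4):
    """Build a time-lagged design matrix; stim is (T, n_features), lags span tmin..tmax s."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    T, F = stim.shape
    X = np.zeros((T, F * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        shifted[:lag] = 0.0                      # zero-pad instead of wrapping around
        X[:, j * F:(j + 1) * F] = shifted
    return X

def fit_trf(stim, y, fs, alphas=(0.1, 1.0, 10.0, 100.0), n_folds=4):
    """Cross-validated ridge TRF for one electrode; returns the best mean held-out R^2."""
    X = lagged_design(stim, fs)
    best = -np.inf
    for a in alphas:
        scores = [Ridge(alpha=a).fit(X[tr], y[tr]).score(X[te], y[te])
                  for tr, te in KFold(n_splits=n_folds).split(X)]
        best = max(best, float(np.mean(scores)))
    return best
```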
Variance partitioning was performed to separate the unique contribution of each model. To quantify the contribution of different stimulus features to neural activity, we estimated the variance explained (R2) uniquely by each model and the variance explained by intersections of various combinations of these models (see Fig. 5A, right). By fitting each model alone and in combination, the R2 attributable solely to one model could be estimated. Set theory was then employed to calculate common (set intersection) and unique (set difference) variances explained [68, 69].
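A small helper illustrating the set-difference definition of unique variance, assuming cross-validated R2 values have already been computed for the full joint model and for each leave-one-out combination. The numbers in the usage comment are hypothetical.

```python
def unique_variance(r2):
    """r2: dict mapping frozensets of model names to cross-validated R^2, e.g.
    r2[frozenset({'acoustic', 'task', 'surprisal'})] for the full joint model.
    Unique variance of model m = R^2(all models) - R^2(all models except m)."""
    models = set().union(*r2.keys())
    full = r2[frozenset(models)]
    return {m: full - r2[frozenset(models - {m})] for m in models}

# example with hypothetical R^2 values:
# r2 = {frozenset({'acoustic', 'task', 'surprisal'}): 0.22,
#       frozenset({'task', 'surprisal'}): 0.18,        # full minus acoustic
#       frozenset({'acoustic', 'surprisal'}): 0.20,    # full minus task
#       frozenset({'acoustic', 'task'}): 0.15}         # full minus surprisal
# unique_variance(r2)  # -> acoustic ~0.04, task ~0.02, surprisal ~0.07
```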
The statistical significance of the encoding model was assessed using a permutation test. We randomized the continuous neural data by dividing it into 10-second bins, each containing temporal segments with comprehensible acoustic information, and shuffling the order of these bins. This procedure was repeated 10,000 times, and the explained variance at the 95th percentile of the resulting null distribution was used as the significance threshold.
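A sketch of the bin-shuffling null distribution. Here `fit_fn` stands in for whatever encoding fit returns an R2 (for example the TRF sketch above), and refitting on every permutation makes this illustrative rather than computationally practical at 10,000 permutations.

```python
import numpy as np

def bin_shuffle_threshold(stim, y, fs, fit_fn, bin_s=10.0, n_perm=10000, seed=0):
    """95th percentile of encoding R^2 after shuffling the order of 10-s bins of y."""
    rng = np.random.default_rng(seed)
    n_bin = int(bin_s * fs)
    n = (len(y) // n_bin) * n_bin                 # trim to a whole number of bins
    blocks = y[:n].reshape(-1, n_bin)
    null = np.empty(n_perm)
    for i in range(n_perm):
        y_shuf = blocks[rng.permutation(len(blocks))].reshape(-1)
        null[i] = fit_fn(stim[:n], y_shuf)        # e.g. lambda s, v: fit_trf(s, v, fs)
    return np.percentile(null, 95)
```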
Statistical Tests
An unpaired one-way t-test (Fig. 1B) was used to test whether neural activity differed between the naming tasks and their controls. We first calculated the average neural activity within the pre-articulatory window (-500ms - 0ms) for each trial. These values were then compared between naming tasks and their control tasks (i.e. VN vs. VWR; AN vs. AWR) across regions of interest (ttest function, Matlab).
A Kruskal-Wallis test (Fig. 3B) was conducted to test whether the distribution of neural activity differed across tasks. We first averaged the neural activity for each electrode across trials within the pre-articulatory window (-500ms - 0ms). For each cluster, the Kruskal-Wallis test (kruskalwallis function, Matlab) was used to determine whether the data from each task come from the same distribution. If significant, post hoc analysis was performed to determine which task pair drove the effect.
A Wilcoxon rank sum test (Fig. 5D) was computed to test whether the distribution of unique variance differed between two functional networks (signrank function, Matlab).
Bootstrapping and Permutation for Laterality
To evaluate the laterality of word retrieval, we implemented bootstrapping (bootci function, Matlab) to obtain confidence intervals (CIs) for the ratios of active electrodes in the left and right hemispheres. We computed a 95% CI of the active electrode ratio for each hemisphere and cluster, with the number of bootstrap samples set to 10,000. The statistical significance of lateralization was assessed using a permutation test.
Under the null hypothesis, the active electrode ratio is equal in the left and right hemispheres. For each cluster, we first calculated the active ratio in both hemispheres by dividing the number of electrodes belonging to the specified cluster by the total number of electrodes in that hemisphere. For each permutation, the active electrodes from both hemispheres were combined and shuffled to obtain a random assignment of active electrodes across the total population; the electrodes were then re-split into left and right hemispheres and the new active ratios were calculated. This procedure was repeated 10,000 times. We compared the resulting null distribution with the actual active ratio by calculating the p-value (i.e. the percentile of the true value within the permutation distribution).
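A sketch of the hemisphere-label permutation in Python; the two-sided p-value based on the absolute ratio difference is an assumption about how the percentile comparison is operationalized, and the function names are illustrative.

```python
import numpy as np

def active_ratio(labels, hemi_mask, cluster):
    """Fraction of one hemisphere's electrodes assigned to a given cluster."""
    return np.mean(labels[hemi_mask] == cluster)

def laterality_p(labels, is_left, cluster, n_perm=10000, seed=0):
    """Permutation test of the left-right difference in active-electrode ratio.
    labels: cluster label per electrode (array); is_left: boolean array per electrode."""
    rng = np.random.default_rng(seed)
    obs = active_ratio(labels, is_left, cluster) - active_ratio(labels, ~is_left, cluster)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuf = rng.permutation(is_left)           # shuffle hemisphere labels across electrodes
        null[i] = active_ratio(labels, shuf, cluster) - active_ratio(labels, ~shuf, cluster)
    return np.mean(np.abs(null) >= np.abs(obs))   # two-sided p-value (assumption)
```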
Data and Code availability
The dataset generated during the current study will be made available by the authors upon request, provided documentation is supplied that the data will be used strictly for research purposes and will comply with the terms of our study IRB. The code will be available upon publication at https://github.com/flinkerlab/.
Declarations
The authors declare that they have no competing interests.
Supplementary Information
Acknowledgments
This work was supported by National Institutes of Health grants R01NS109367, R01NS115929, and R01DC018805 (A.F.).
References