Abstract
The neural basis of reading is highly consistent across a variety of languages and visual scripts. An unanswered question is whether the sensory modality of symbols influences the neural basis of reading. According to the modality-invariant view, reading depends on the same neural mechanisms regardless of the sensory input modality. Consistent with this idea, previous studies find that the visual word form area (VWFA) within the ventral occipitotemporal cortex (vOTC) is active when blind individuals read Braille by touch. However, connectivity-based theories of brain function suggest that the neural entry point of written symbols (touch vs. vision) may influence the neural architecture of reading. We compared the neural basis of visual print (sighted, n=15) and tactile Braille (congenitally blind, n=19) in proficient readers using analogous reading and listening tasks. Written stimuli varied in word-likeness from real words to consonant strings and non-letter shape strings. Auditory stimuli consisted of words and backward speech sounds. Consistent with prior work, vOTC was active during Braille and visual reading. However, in sighted readers, visual print elicited a posterior/anterior vOTC word-form gradient: anterior vOTC preferred larger orthographic units (words), middle vOTC preferred consonant strings, and posterior vOTC responded to shapes (i.e., lower-level physical features). No such gradient was observed in blind readers of Braille. Consistent with connectivity predictions, in blind Braille readers, posterior parietal cortices (PPC) and parieto-occipital areas were recruited to a greater degree and PPC contained word-preferring patches. Lateralization of Braille in blind readers was predicted by laterality of spoken language, as well as by reading hand. These results suggest that the neural basis of reading is influenced by symbol modality and support connectivity-based views of cortical function.
Highlights
Sighted but not blind (Braille) readers show a posterior/anterior vOTC lexicality gradient
Posterior parietal cortex distinctively contributes to Braille reading
Lateralization of spoken language and reading hand predict lateralization of Braille
The sensory modality of written symbols influences the neural basis of reading
Introduction
Written language is among the most impressive human cultural achievements. The capacity to record and transmit information over space and time has enabled the accumulation of scientific, technological, and historical knowledge across generations and continents. How does the human brain accommodate this cultural invention, which emerged only approximately 5,000 years ago?
Despite being a recent cultural invention, the neural basis of reading is highly consistent across a variety of languages and visual scripts, including alphabetic, logographic (e.g., Chinese), and syllabic writing systems (e.g., Japanese Kana) (Bolger, Perfetti, & Schneider, 2005; Feng et al., 2020; Hu et al., 2010; Krafnick et al., 2016; Nakamura et al., 2012; Rueckl et al., 2015). All of these reading systems engage regions within the left lateral ventral occipitotemporal cortex (vOTC) (Baker et al., 2007; Cohen et al., 2000; Dehaene & Cohen, 2011; Dehaene et al., 2010). A region in the left lateral vOTC has been termed the ‘visual word form area’ (VWFA) because of its preferential response to written words and letter combinations over other visual stimuli.
The VWFA is situated within a posterior/anterior processing gradient. During reading, visual symbols are first processed by early visual cortices and posterior portions of vOTC, which represent simple visual features (e.g., line junctions) (Dehaene, Cohen, Sigman, & Vinckier, 2005; DiCarlo & Cox, 2007). By contrast, the middle and anterior portions of lateral vOTC are specialized for progressively larger orthographic units, from written letters to letter combinations/bigrams and finally whole words (Binder, Medler, Westbury, Liebenthal, & Buchanan, 2006; Cohen et al., 2000; Dehaene, Cohen, Sigman, & Vinckier, 2005; Dehaene et al., 2004; Glezer, Jiang, & Riesenhuber, 2009; Lerma-Usabiaga, Carreiras, & Paz-Alonso, 2018; Purcell, Shea, & Rapp, 2014; Vinckier et al., 2007).
An open question is whether the vOTC posterior/anterior processing stream is the only way for the brain to implement reading and, relatedly, why the neural basis of reading takes this particular form. Examining the neural basis of tactile Braille offers unique insights into these questions. Specifically, we can ask whether and how the sensory modality of written symbols influences the neural basis of reading.
Tactile Braille reading achieves similar behavioral goals for people who are blind as visual print reading does for the sighted: rapid access to linguistic meaning from a temporally stable symbolic record. Proficient blind readers can read upwards of 200 words per minute by passing the fingers along lines of Braille text, in which words are written as patterns of raised dots (Millar, 2003). Each Braille character consists of dots positioned in a three-rows-by-two-columns matrix. A single Braille character can be used to represent a letter, number, or punctuation mark. In the most commonly used form of English Braille (Grade 2 Braille), Braille characters also stand for frequent letter combinations (e.g., EA, OW) and whole words (e.g., e = every, tm = tomorrow) (http://www.brl.org) (Millar, 2003).
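As an aside for readers unfamiliar with the encoding, the six dot positions of a Braille cell are conventionally numbered 1-3 down the left column and 4-6 down the right, and Unicode encodes each cell as U+2800 plus a dot bitmask. The sketch below is purely illustrative and is not part of the study's stimuli:

```python
# Dots 1-3 run down the left column of the cell, dots 4-6 down the right.
# A character is a set of raised dots; e.g., the first three letters:
LETTER_DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}}

def to_unicode_braille(dots):
    """Map a set of dot numbers to the Unicode Braille cell (U+2800 block)."""
    mask = sum(1 << (d - 1) for d in dots)  # bit i-1 encodes dot i
    return chr(0x2800 + mask)
```

For example, `to_unicode_braille(LETTER_DOTS["a"])` yields the single-dot cell for the letter "a" (⠁).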
Consistent with a modality-invariant view of reading, several recent studies have reported that the neural basis of Braille reading and that of visual print reading depend on similar vOTC mechanisms (Büchel, Price, & Friston, 1998a; Debowska et al., 2016; Rączy et al., 2019; Reich, Szwed, Cohen, & Amedi, 2011). Visual print and tactile Braille reading elicit activation peaks at the anatomical location of the ‘VWFA’ in both sighted and blind readers (Debowska et al., 2016; Dzięgiel-Fivet et al., 2021; Kim, Kanjlia, Merabet, & Bedny, 2017; Rączy et al., 2019; Reich et al., 2011; Siuda-Krzywicka et al., 2016). In sighted adults who are trained to recognize Braille words, transcranial magnetic stimulation (TMS) to the VWFA disrupts reading accuracy (Bola et al., 2019; Siuda-Krzywicka et al., 2016). A recent study also found similar repetition suppression effects in vOTC for tactile (blind readers) and visual (sighted readers) pseudowords (Rączy et al., 2019). This evidence supports the idea that reading depends on the same neural mechanisms in vOTC, regardless of symbol modality (i.e., touch vs. vision).
At the same time, both theoretical considerations and empirical evidence suggest that the neural basis of tactile Braille and visual print reading may differ in important ways that have not been fully tested. In sighted readers, posterior portions of vOTC receive visual written forms from early visual cortices and pass this information along the posterior/anterior orthographic gradient (e.g., Dehaene et al., 2005). By contrast, in people who are blind, Braille information enters the cortex at primary somatosensory cortex (S1), making a posterior/anterior gradient unlikely. A number of imaging studies also find that Braille reading activates visual areas outside of vOTC in people who are blind, including V1 and dorsal occipital areas (Cohen et al., 1997, 1999; Gizewski, Gasser, De Greiff, Boehm, & Forsting, 2003; Kupers et al., 2007; Melzer et al., 2001; Sadato et al., 1998, 1996). TMS to the occipital pole and midoccipital cortex disrupts Braille reading (Cohen et al., 1997, 1999; Kupers et al., 2007). This suggests that vOTC may not make a unique contribution to Braille reading in the same way that it does to visual reading.
Moreover, visual cortices of people who are born blind, including vOTC and early visual areas (V1-V3), are recruited for non-visual functions apart from Braille (e.g., Amedi, Raz, Pianka, Malach, & Zohary, 2003; Büchel et al., 1998b; Burton, Snyder, Diamond, & Raichle, 2002; Gougoux, Zatorre, Lassonde, Voss, & Lepore, 2005; Kanjlia, Loiotile, Harhen, & Bedny, 2021; Kujala, Alho, Paavilainen, Summala, & Naatanen, 1992; Sathian, 2005). Particularly relevant for the neural basis of Braille, large swaths of blind ‘visual’ cortex, including portions of V1, participate in processing spoken language, including high-level semantic and grammatical information (Bedny, Pascual-Leone, Dodell-Feder, Fedorenko, & Saxe, 2011; Bedny, Richardson, & Saxe, 2015; Burton, Snyder, Diamond, & Raichle, 2002; Noppeney, Friston, & Price, 2003; Röder, Stock, Bien, Neville, & Rösler, 2002; Watkins et al., 2012). Indeed, there is evidence that the anatomical location of the ‘VWFA’ shows larger responses to spoken language and responds to the grammatical structure of spoken sentences in people who are blind, more so than in people who are sighted (Dzięgiel-Fivet et al., 2021; Kim et al., 2017). This pattern suggests possible involvement in high-order language processing, rather than a reading-specific role in blindness. Furthermore, since the anatomical distribution of written language is believed to be influenced by the anatomical distribution of spoken language (Behrmann & Plaut, 2013; Hannagan & Grainger, 2013; Saygin et al., 2016; Stevens, Kravitz, Peng, Tessler, & Martin, 2017a), recruitment of the visual cortex for language processing could itself modify the neural basis of Braille reading in blind people. For example, we might expect Braille to recruit occipital regions that are connected to visual networks recruited for spoken language. 
Together, this evidence suggests that the anatomical distribution and functional role of visual cortices in blind Braille readers and sighted visual readers may not be equivalent and merit further investigation.
There are also reasons to hypothesize that tactile Braille reading may differentially recruit networks outside of the visual system, specifically the posterior parietal cortex (PPC). The vOTC occupies a key connectivity position in sighted readers, in that it is connected to visual input on the one hand and linguistic representations on the other (Barttfeld et al., 2018; Bouhali et al., 2014; Hannagan, Amedi, Cohen, Dehaene-Lambertz, & Dehaene, 2015; Li, Osher, Hansen, & Saygin, 2020; Saygin et al., 2016; Stevens, Kravitz, Peng, Tessler, & Martin, 2017b; Yeatman, Rauschecker, & Wandell, 2013). The PPC arguably occupies an analogous connectivity-based position for tactile Braille. Not only is PPC anatomically proximal and densely connected to early somatosensory cortices (SMC), but, like anterior/lateral vOTC, it is connected to language and working memory systems (Burks et al., 2017; Duhamel, Colby, & Goldberg, 1998; Kaas, 2012; Lewis & Van Essen, 2000; Ruschel et al., 2014). Analogous to the functional role of vOTC in visual shape recognition, the PPC furthermore plays a key role in tactile shape and texture perception, pertinent to Braille recognition (Bauer et al., 2015; Hegner, Lee, Grodd, & Braun, 2010). For example, stronger PPC activity is observed during tactile pattern discrimination compared to vibrotactile detection (Hegner et al., 2010). We therefore hypothesized that portions of PPC may specialize for tactile Braille letter and word recognition, analogous to specialization for visual word form recognition within the vOTC of sighted print readers. To our knowledge, the hypothesis of selective responses to Braille words in PPC has not previously been tested.
Although previous studies have examined activity in early SMC and found expanded finger representations in proficient Braille readers, there is no evidence that this plasticity reflects specialization for Braille letters and words (Burton, Snyder, Conturo, et al., 2002; Burton, Sinclair, & McLaren, 2004; Kupers et al., 2007; Pascual-Leone et al., 1993; Pascual-Leone & Torres, 1993; Sadato et al., 1998). One goal of the current study was therefore to test whether any portion of PPC shows preferential responses to Braille letters and words in blind readers of Braille, akin to specialization for visual letters and words found in vOTC of sighted readers.
Finally, we hypothesized that lateralization patterns of Braille (blind) and visual print (sighted) reading would be analogous but distinct. The reading network is typically strongly left-lateralized in sighted people, like the spoken language network (Behrmann & Plaut, 2020; Ossowski & Behrmann, 2015; Schlaggar & McCandliss, 2007; Seghier & Price, 2011; Vinckier et al., 2007). Studies with sighted people who have right-lateralized spoken language responses find that reading ‘follows’ spoken language into the right hemisphere (Behrmann & Plaut, 2020; Cai, Lavidor, Brysbaert, Paulignan, & Nazir, 2008; Cai, Paulignan, Brysbaert, Ibarrola, & Nazir, 2010; Cai & Van der Haegen, 2015; Van der Haegen, Cai, & Brysbaert, 2012). In people who are blind, left-lateralization of spoken language is reduced and highly variable across individuals (Lane et al., 2017; Röder, Rösler, & Neville, 2000; Röder et al., 2002). We therefore hypothesized that responses to Braille would be likewise less left-lateralized in blind readers and would show co-lateralization with spoken language across individuals.
A further potential determining factor of Braille lateralization that does not arise for visual print is reading hand. In visual reading, information typically enters through both eyes and is projected to both hemispheres. By contrast, in the case of Braille, it is possible for the information to enter the left or the right hemisphere first, depending on the reading hand. Reading hand preferences and reading styles differ widely across proficient blind Braille readers (Millar, 1984, 2003).
Many blind readers use both hands during naturalistic reading; however, one hand is thought to track position on the page, while the other is used for word recognition (Millar, 2003). We hypothesized that during single hand Braille reading, lateralization in early somatosensory cortices would depend on which hand was used during word recognition, but that the effect of reading hand would weaken in posterior parietal reading regions and would disappear in language regions (Lane et al., 2017).
To test these predictions, we compared the neural basis of reading in proficient congenitally blind and sighted readers using analogous reading and spoken language tasks. In the reading tasks, participants were presented with words, consonant strings, and non-letter shapes/false fonts. Reading stimuli were visual (print) for the sighted participants and tactile (Braille) for the blind participants. In the spoken language task, both groups listened to audio words and backward speech sounds. First, we tested the prediction that there is a posterior-to-anterior gradient in preference from false fonts to consonant strings and finally words in the vOTC of sighted but not blind readers. Previous studies find that posterior vOTC responds to false fonts as much as, or more than, to letters and words, with only a small lateral/anterior portion (the so-called VWFA) being selective for written words and letters (Vinckier et al., 2007). By contrast, we predicted that in blind readers, the entire extent of vOTC would show a preference for words, consistent with its involvement in language processing (Kim et al., 2017; Lane, Kanjlia, Omaki, & Bedny, 2015; Röder et al., 2002; Watkins et al., 2012). Next, we tested the hypothesis that the PPC of blind Braille readers shows a functional profile analogous to the vOTC of the sighted: selective responses to written words as opposed to tactile shapes in a subset of PPC, surrounded by equal or greater responses to tactile shapes. We compared responses in PPC with those of early SMC, where we would expect larger or equal responses to tactile shapes. Moreover, we hypothesized that regions of PPC most distal from S1 and posterior to it are most likely to show specialization for Braille letters and words, an anterior/posterior gradient analogous to the posterior/anterior gradient observed in the vOTC of sighted readers.
We also examined responses across groups in other cortical areas previously implicated in reading: left inferior frontal cortex (IFC) and primary visual cortex (V1) and used whole-cortex analyses to quantify the anatomical distribution of visual and Braille reading (Burton, Snyder, Conturo, et al., 2002; Harold Burton, Sinclair, & Agato, 2012; Rueckl et al., 2015; Sadato et al., 1998). Finally, we used laterality index analyses to compare lateralization patterns across written and spoken word comprehension in the two groups. We tested the prediction that lateralization of reading would be driven by the lateralization of spoken language in higher-order language regions (left IFC), by reading hand in early SMC, and by both factors in reading-related areas (PPC).
Method
Participants
Nineteen congenitally blind (12 females, mean age = 40.36 years, SD = 14.82) and 15 sighted control (9 females, mean age = 23 years, SD = 6) participants took part in the task-based fMRI experiment (see Table 1 for participant characteristics). The data from 10 blind and 15 sighted participants have been reported previously (Kim et al., 2017). All participants were native English speakers, and none had suffered from any known cognitive or neurological disabilities (screened through self-report). Sighted participants had normal or corrected to normal vision. All the blind participants had at most minimal light perception from birth. Blindness was caused by pathology anterior to the optic chiasm (i.e., not due to brain damage). All blind participants were fluent Braille readers who began learning Braille at an average age of 4.6 years (SD = 1.49) and rated their reading ability as proficient to expert (mean = 4.57, SD =0.69 on a scale of 1 to 5) and reported reading on average 20 hours per week (SD=19). We obtained information on Braille-reading hand dominance, whether they read bimanually, and reading frequency through a post-experimental survey conducted over the telephone with 17 of the 19 blind adult participants (Table 1). All participants gave informed consent according to procedures approved by the Johns Hopkins Medicine Institutional Review Board.
Stimuli
The fMRI experiment included reading and listening tasks (Figure 1). There were three stimulus conditions for the reading task: words, non-word consonant strings, and non-letter shapes (control condition). During the reading task, stimuli were visual for the sighted participants and tactile for the blind participants. For the listening task, there were two conditions: words and backward speech sounds (control condition).
The word stimuli consisted of 240 common nouns, verbs, and adjectives. For the tactile reading task (blind group), the Braille words were written in Grade-II contracted English Braille, which is the most common form of Braille in the United States. Braille characters contain between 1-6 raised pins in set positions within a 2 x 3 array (see Figure 1). In Grade-II contracted English Braille, there are contractions such that single Braille characters represent frequent letter combinations (e.g., “th”) or frequent whole words (e.g., the “c” can stand for “can”). With contractions, the Braille words were on average 4 Braille characters (range = 1-8 Braille characters, SD = 2.1 characters) and 11 tactile pins per word. Note that each participant was presented with 120 of the 240 words during the reading task; the other 120 words were presented auditorily during the listening task (see below). The word lists were counterbalanced across participants. In the tactile consonant string condition, there were 24 strings repeated 5 times throughout the experiment. Each string stimulus consisted of 4 Braille letters, which were created using 20 English consonants. Last, the tactile control stimuli consisted of 24 unique strings of 4 non-letter shapes made of Braille pins (see Figure 1). Note that any dot array within a 2 x 3 grid could be part of a Braille character. Therefore, to prevent participants from processing the shapes as Braille letters, the shapes varied in size and pin number within arrays ranging in size from 4 × 5 to 7 × 7. The average number of Braille pins per string in the control condition was 58.
For the sighted group, the word stimuli consisted of 240 common nouns, verbs, and adjectives that were on average 4 letters long (range = 3-5 letters, SD = 0.7 letters). Visual word stimuli consisted of a new set of words matched to the Braille words on average character length (i.e., 4 visual letters matched to 4 Braille characters), raw frequency per million, averaged frequency per million of orthographic neighbors, and averaged bigram frequency (all comparisons p > 0.4, obtained from the MCWord Orthographic Wordform Database; Medler & Binder, 2005).
Different groups of words were used for the visual and Braille experiment to enable character length matching since Braille contractions represent two or more English letters with a single Braille character. Like the blind participants, sighted participants encountered half (120) of the words during reading trials and the other half during auditory trials, counterbalanced across participants. The visual consonant strings were the same 24 consonant letter combinations from the tactile consonant strings described above. Lastly, the control stimuli in the visual reading task were 24 unique strings, each comprised of 4 characters, which were false fonts. There were 20 false font characters in total, which matched the 20 English consonants on the number of strokes, presence of ascenders and descenders, and the stroke thickness.
The stimuli for the listening task were taken from each group’s respective word list. For the audio word condition, stimuli were 120 words taken from the reading task described above. For a given word, half of the participants received it in the reading task and half received it in the listening task. The auditory words were recorded by a female native English speaker. The average word length was 5 letters (SD = 1.4 letters). The average duration of the auditory stimuli was 0.41 s (SD = 0.3 s). The control auditory stimuli comprised backward speech sounds, which were created by playing each audio word in reverse.
Procedure
The experiment had a total of 5 runs, each with 20 task trials. In each trial, participants were presented with a block of 6 stimuli from a single condition (e.g., tactile reading consonant strings condition) and then performed a memory probe task. All stimulus conditions for both reading and listening trials were presented in every run. Each condition was repeated 4 times per run, and the order of conditions was counterbalanced across runs. There were 6 rest periods (16 s) throughout each run. One sighted participant and two blind participants were excluded from behavioral analysis due to failure to record their responses.
For the blind participants, each trial began with a 0.5 s auditory cue instructing participants to “Touch” (reading trial) or “Listen” (listening trial). Then participants felt or heard blocks of 6 target items, one at a time. For 10 of the blind participants, tactile target stimuli were presented on the Braille display for 2 s, followed by a 0.75 s inter-stimulus interval (ISI) (6-item list duration: 16.5 s) (Kim et al., 2017). For the newly added 9 blind participants, the ISI was lengthened to 1.75 s due to a coding error, which caused the 6-item list duration to be prolonged to 22.5 s. Control analyses revealed no effects of ISI duration on the results and the data are henceforth combined. After the 6-item list had been presented, there was a short delay (0.2 s), followed by a beep (0.5 s). A probe stimulus (2 s) was then presented, and participants indicated with a key press whether or not the probe had been present in the list. Participants had 5.3 s to make a response. The participants were asked to read with their dominant hand and responded with the other hand. The listening task was analogous in format to the reading task. The audio words and backward speech were on average 0.41 s long. The timing and sequence of events were identical for the listening task (6-item list duration 16.5 s).
For sighted participants, the trial event sequence (cue, 6-item block, beep, probe, response) was analogous to above. Each trial began with an auditory cue instructing participants to “Look” (reading trial) or “Listen” (listening trial). During reading trials, 6 visual stimuli appeared centrally for 1 s each, followed by an ISI of 0.75 s, during which participants were asked to maintain gaze on a black central fixation cross (total block duration: 10.5 s). Note that visual reading blocks were shorter than tactile reading blocks for the blind participants because pilot testing indicated that visual reading is faster under these conditions. Listening trials also had a total stimulus block duration of 10.5 s, to be consistent with the reading trials within the sighted group.
fMRI data acquisition
Functional and structural images were acquired using a 3T Phillips scanner at the F. M. Kirby Research Center. T1-weighted structural images were collected using a magnetization-prepared rapid gradient-echo (MP-RAGE) sequence in 150 axial slices with 1 mm isotropic voxels. Functional BOLD scans were collected in 36 sequential ascending axial slices (TR = 2 s, TE = 0.03 s, flip angle = 70°, voxel size = 2.4 × 2.4 × 2.5 mm, inter-slice gap = 0.5 mm, field of view (FOV) = 192 × 172.8 × 107.5). Acquisition parameters were identical for the resting-state and task fMRI experiments.
fMRI data analysis
Preprocessing and whole-cortex analysis
Analyses were performed using FSL (version 5.0.9), FreeSurfer (version 5.3.0), the Human Connectome Project workbench (version 1.2.0), and custom in-house software. The cortical surface was created for each participant using the standard FreeSurfer pipeline (Dale, Fischl, & Sereno, 1999; Glasser et al., 2013; Smith et al., 2004). For task data, preprocessing of functional data included motion-correction, high-pass filtering (128 s cut-off), and resampling to the cortical surface. Cerebellar and subcortical structures were excluded. On the surface, the task data were smoothed with a 6 mm FWHM Gaussian kernel. Two runs for blind and three runs for sighted participants were dropped due to equipment failure.
The three conditions in the reading task and two conditions in the listening task were entered as covariates of interest into general linear models. Only the six-item period in each trial was entered into the model. Covariates of interest were convolved with a standard hemodynamic response function, with temporal derivatives included. Probe stimuli, response periods, and trials in which participants failed to respond were entered as covariates of no interest. Mean white-matter and CSF signals, as well as motion spikes, were also included as covariates of no interest. Runs were combined within subjects using fixed-effects models.
Data across participants were combined within groups using random-effects analysis. Reported whole-cortex contrasts were thresholded at p < 0.01 vertexwise and p < 0.05 cluster-corrected.
fMRI ROI analysis
Individual-subject functional regions of interest (ROIs) were defined within the vOTC and other regions previously implicated in Braille reading (V1), language (left inferior frontal cortex, IFC), and tactile perception (left posterior parietal cortex, PPC, and hand region of the left primary somatosensory-motor cortex, SMC). To construct the left vOTC search space, we first combined the left fusiform, inferior temporal, and lateral occipital parcels from Freesurfer’s automated aparc parcellation and then excluded V1, V2 regions, and vertices with y-coordinates greater than −30 (Lerma-Usabiaga et al., 2018). To test the posterior-to-anterior functional gradient, the left vOTC search space was divided into three portions: posterior (y < −64), middle (−48 > y > = −64), and anterior (y > = −48). The search space in the right hemisphere was created by flipping the left vOTC masks along the x-axis. The V1 search space was defined from a previously published anatomical surface-based atlas (PALS-B12; Van Essen, 2005). The left inferior frontal language (IFC) search space was defined by using a sentence vs. non-words contrast (Fedorenko, Hsieh, Nieto-Castañón, Whitfield-Gabrieli, & Kanwisher, 2010). The parietal search space was defined by the orthogonal contrast of all tactile conditions (words, consonant strings, and control) > rest in whole-cortex analysis, excluding the occipital parcels from Freesurfer’s automated aparc parcellation. To look for lateralization effects in vOTC across groups, we examined responses separately for the right and left hemispheres.
Individual-subject functional ROIs were defined in group-wise search spaces (described below). Each individual subject’s ROI was defined as the top 5% of vertices activated for the tactile/visual consonant strings > tactile/visual controls contrast within the search spaces listed above. We used this consonant string contrast for the primary analysis in order to focus on orthographic as opposed to semantic responses. However, all analyses were also repeated using the words > control contrast and results from these analyses are reported in the supplementary material (Figure S3 and Figure S5). To avoid using the same data to define ROIs and to test hypotheses, a leave-one-run-out cross-validation procedure was used. ROIs were defined based on data from all but one run, then the percent signal change (PSC) was extracted from the left-out run. This procedure was repeated iteratively across all runs and the PSC was averaged across iterations.
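The leave-one-run-out procedure can be sketched as follows. This is a minimal illustration with numpy, assuming the contrast z-maps and percent-signal-change maps have already been computed per run; array names and shapes are hypothetical:

```python
import numpy as np

def crossvalidated_psc(contrast_z, psc_per_run, search_space, top_frac=0.05):
    """Leave-one-run-out ROI analysis (illustrative sketch).

    contrast_z   : (n_runs, n_vertices) z-maps for the ROI-defining contrast
                   (e.g., consonant strings > controls), one per run
    psc_per_run  : (n_runs, n_vertices) percent signal change per run
    search_space : boolean mask (n_vertices,) for the anatomical search space
    """
    n_runs = contrast_z.shape[0]
    held_out_psc = []
    for test_run in range(n_runs):
        train = [r for r in range(n_runs) if r != test_run]
        # Define the ROI from all runs except the left-out run
        train_z = contrast_z[train].mean(axis=0)
        train_z = np.where(search_space, train_z, -np.inf)
        n_top = int(np.ceil(top_frac * search_space.sum()))
        roi = np.argsort(train_z)[-n_top:]  # top 5% of vertices
        # Extract PSC from the independent, left-out run
        held_out_psc.append(psc_per_run[test_run, roi].mean())
    # Average across cross-validation iterations
    return float(np.mean(held_out_psc))
```

The key property is independence: the vertices defining the ROI never come from the run in which the response magnitude is measured.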
Repeated-measured ANOVAs were used to analyze the ROI data, and two-tailed paired t-tests were used for pairwise comparisons. All p values were Bonferroni-corrected for multiple comparisons.
Topographical preference map
To explore the posterior-to-anterior gradient in left vOTC in a data-driven way, we mapped the topographical preference of the vOTC during reading using a winner-take-all approach. We took the bilateral vOTC as the mask and color-coded each vertex within the mask according to the stimulus condition to which it responded most strongly. The topographical preference map of the PPC and parieto-occipital/dorsal occipital cortex was created using the same winner-take-all approach. The mask was defined by the orthogonal contrast of all tactile conditions (words, consonant strings, and control) > rest in the whole-cortex analysis.
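The winner-take-all labeling amounts to a per-vertex argmax over conditions. A minimal sketch with numpy, assuming per-condition response maps are available as arrays (names and shapes are hypothetical):

```python
import numpy as np

def winner_take_all(beta_maps, mask):
    """Winner-take-all preference map (illustrative sketch).

    beta_maps : (n_conditions, n_vertices) responses to each condition
                (e.g., words, consonant strings, control)
    mask      : boolean (n_vertices,) search space (e.g., vOTC)
    Returns, per vertex, the index of the preferred condition
    (-1 outside the mask), which can then be color-coded on the surface.
    """
    winners = np.full(beta_maps.shape[1], -1)
    winners[mask] = np.argmax(beta_maps[:, mask], axis=0)
    return winners
```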
Laterality index analysis
To determine whether spoken and written language co-lateralize to the same hemisphere, we performed laterality index (LI) analyses. LI was calculated separately for the reading and listening tasks for each participant in the SMC, PPC, vOTC, V1, IFC, and also for the whole cortex. For the reading task, LI was determined based on the tactile/visual words > rest contrast. For the listening task, LI was determined using the audio words > rest contrast. The LI was calculated using the standard formula: (L - R) / (L + R), where L and R refer to the sums of the z statistics from the relevant contrast within the left and right hemispheres, respectively. LI ranges from −1 to 1, with a score of 1 indicating strong left lateralization and −1 strong right lateralization.
The bootstrap/histogram method was used to ensure that LIs were not overly influenced by arbitrary activation threshold choices or outlier voxels. Bootstrapped LIs were computed using 20 evenly spaced thresholds ranging from z = 1.28 to z = 4.26 (corresponding to one-sided p = 0.1 to p = 0.00001, uncorrected). For every threshold, each participant’s z statistic map was masked to only include the voxels exceeding the threshold within the search space. Then we sampled the suprathreshold voxels 100 times with replacement in each hemisphere at a sampling ratio k = 1.0. The LIs were then calculated using each pair of left and right hemisphere samples, yielding a histogram of 10,000 threshold-specific LIs. Next, a single LI for each threshold was calculated by averaging the values after removing the upper and lower 25% of the 10,000 threshold-specific values. Finally, the LI reported for each participant represents the average across all thresholds.
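The bootstrap/histogram procedure described above can be sketched as follows. This is an illustrative numpy implementation under the stated parameters (20 thresholds, 100 resamples per hemisphere, 25% trimming); input arrays are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def bootstrap_li(z_left, z_right, n_boot=100, trim=0.25):
    """Bootstrap/histogram laterality index (illustrative sketch).

    z_left, z_right : 1-D arrays of z statistics within one search space
    LI = (L - R) / (L + R), where L and R are sums of suprathreshold z
    values; resampling and trimming guard against threshold and outlier
    effects.
    """
    thresholds = np.linspace(1.28, 4.26, 20)  # one-sided p = 0.1 to 0.00001
    li_per_threshold = []
    for t in thresholds:
        l_vox, r_vox = z_left[z_left > t], z_right[z_right > t]
        if l_vox.size == 0 or r_vox.size == 0:
            continue  # no suprathreshold voxels in one hemisphere
        # 100 resamples per hemisphere (k = 1.0, with replacement)
        l_sums = np.array([rng.choice(l_vox, l_vox.size).sum()
                           for _ in range(n_boot)])
        r_sums = np.array([rng.choice(r_vox, r_vox.size).sum()
                           for _ in range(n_boot)])
        # All 100 x 100 = 10,000 pairwise LIs for this threshold
        lis = np.sort(((l_sums[:, None] - r_sums[None, :])
                       / (l_sums[:, None] + r_sums[None, :])).ravel())
        k = int(trim * lis.size)
        li_per_threshold.append(lis[k:lis.size - k].mean())  # trimmed mean
    return float(np.mean(li_per_threshold))
```

The value returned for each participant is the average across thresholds, matching the final step of the procedure above.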
A small number of participants were excluded from the LI analysis for a particular region if they did not have suprathreshold activation in both hemispheres (listening task-SMC: 2 sighted, 2 blind participants excluded; PPC: 1 sighted; V1: 6 sighted; IFC: 1 sighted; reading task-SMC: 4 sighted; PPC: 1 sighted; IFC: 1 sighted).
To examine the effects of spoken language lateralization and Braille reading hand on reading lateralization, a multiple regression was conducted for each region. The LI of spoken words in IFC and the dominant reading hand were entered as regressors, and the LI of written words was the dependent variable. Although some participants reported reading Braille bimanually, participants were asked to read the tactile stimuli during the experiment with their dominant reading hand only. Seven blind participants were dominant left-hand Braille readers and 10 were dominant right-hand readers.
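The per-region regression can be sketched as below. Note that the Results report standardized β values, whereas this minimal numpy version returns unstandardized coefficients; the hand coding (left = −1, right = +1), function name, and toy data are illustrative assumptions.

```python
import numpy as np

def fit_li_regression(written_li, spoken_li_ifc, reading_hand):
    """Predict each participant's written-word LI from (a) dominant
    Braille-reading hand (coded here as left = -1, right = +1) and
    (b) spoken-word LI in IFC. Returns (intercept, b_hand, b_spoken)."""
    X = np.column_stack([
        np.ones(len(written_li)),         # intercept
        np.asarray(reading_hand, float),  # regressor 1: dominant reading hand
        np.asarray(spoken_li_ifc, float), # regressor 2: spoken-word LI in IFC
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(written_li, float), rcond=None)
    return coef

# Toy data constructed to satisfy y = 0.1 + 0.5*hand + 0.4*spoken exactly.
hand = np.array([-1, -1, 1, 1, 1, -1], float)
spoken = np.array([0.2, -0.3, 0.5, 0.1, -0.4, 0.6], float)
written = 0.1 + 0.5 * hand + 0.4 * spoken
print(fit_li_regression(written, spoken, hand))  # ~ [0.1, 0.5, 0.4]
```

Standardizing the variables (z-scoring) before the fit would recover standardized β values of the kind reported in the Results.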
Results
Behavioral Results
Higher accuracy and shorter reaction times for word-like stimuli
Because the two groups differed in age, we regressed out the effect of age on accuracy and reaction times and performed analyses on the residuals (see Figure S1 in Supplementary materials; results from the raw data are also included in Figure S1). In the reading task, there was a significant effect of age on accuracy (main effect of age, F (1, 85) = 5.681, p < 0.05). A two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA performed on the residuals revealed higher accuracy on more word-like stimuli (words and consonant strings > control) in both blind and sighted groups (main effect of lexicality: F(2, 54) = 13.963, p < 0.001). There was no lexicality by group interaction (F(2, 54) = 0.872, p = 0.737). The group effect was marginal (sighted > blind, F(1, 27) = 3.603, p = 0.068). For the listening task, there was a trending effect of age on accuracy (F(1, 56) = 2.907, p = 0.094). A two-way lexicality (words, control) by group (sighted, blind) ANOVA on the residuals revealed a lexicality effect (words > control; F(1, 29) = 50.944, p < 0.001), no group effect (F(1, 27) = 0.843, p = 0.367), and no group by lexicality interaction (F(1, 27) = 0.549, p = 0.465).
Likewise, for reaction times during the reading task, there was a significant effect of age (F(1, 85) = 39.089, p < 0.001). A two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA on the residuals revealed a lexicality effect (words and consonant strings < control; F(2, 54) = 8.09, p < 0.001). There was no group effect (F(1, 27) = 8.09, p = 0.297). The group by lexicality interaction was marginal (F(2, 54) = 2.763, p = 0.072). Pairwise comparisons showed shorter reaction times for more word-like stimuli in the blind group, but no differences across stimuli in the sighted group (blind: words vs. control, t(16) = −2.91, p < 0.01; consonant strings vs. control, t(16) = −2.604, p < 0.01; words vs. consonant strings, t(16) = −0.686, p > 0.99; sighted: all pairwise comparisons p > 0.05; p-values were Bonferroni-corrected).
During the listening task, the main effect of age on reaction time was significant (F (1, 85) = 15.892, p < 0.001). A two-way lexicality (words, control) by group (sighted, blind) ANOVA on the residuals revealed a lexicality effect (words < control; F(1, 29) = 50.944, p < 0.001). There was no group effect (F(1, 27) = 0.071, p = 0.792) or group by lexicality interaction (F(1, 29) < 0.001, p > 0.99).
fMRI Results
Visual (sighted) but not tactile Braille reading (blind) elicits a posterior-to-anterior functional gradient in left vOTC and shows left-lateralization
Two signatures of visual reading responses in vOTC are 1) a posterior-to-anterior word form gradient and 2) left-hemisphere lateralization. We asked whether Braille reading in blind individuals shows similar posterior-to-anterior and laterality effects as visual reading in sighted people. We divided the left and right vOTC each into the posterior, middle, and anterior subregions (ROIs) and compared responses in these subregions across hemispheres and groups (see Methods, Figure 1). We first conducted a four-way hemisphere (left, right) by posterior/anterior subregion (posterior, middle, anterior) by lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA to examine reading responses across groups. This ANOVA revealed a four-way interaction (F (4, 128) = 3.028, p < 0.05), indicating that lexicality, hemisphere, and posterior/anterior subregion interact with group. Next, we used separate ANOVAs for each group to unpack the 4-way interaction. Because of the large number of factors and to preserve readability, we report only hypothesis-relevant effects in this section. A complete summary of all effects can be found in the Supplemental Materials.
For the sighted group, we found the expected three-way interaction between hemisphere (left, right), posterior/anterior subregion (posterior, middle, anterior) and lexicality (words, consonant strings, control; F (4, 56) = 4.287, p < 0.01). Next, we looked at each hemisphere separately in the sighted group.
In the left vOTC, there was a two-way interaction between lexicality (words, consonant strings, control) and posterior/anterior subregion (posterior, middle, anterior; F (4, 56) = 9.69, p < 0.001), reflecting the expected posterior-to-anterior functional gradient. Pairwise comparisons revealed that the posterior vOTC responded similarly to all visual stimuli (all pairwise comparisons p > 0.05). By contrast, in middle vOTC, consonant strings elicited higher responses than both words and control stimuli (Bonferroni-corrected paired t-test for words vs. consonant strings: t(14) = −3.918, p < 0.05; consonant strings vs. control: t(14) = 4.106, p < 0.01). In anterior vOTC, responses to words and consonant strings were both higher than control and not different from each other (Bonferroni-corrected paired t-test for words vs. control: t(14) = 3.461, p < 0.05; consonant strings vs. control: t(14) = 3.327, p < 0.05, all other pairwise comparisons p > 0.05).
In the right vOTC of the sighted group, a two-way lexicality (words, consonant strings, control) by posterior/anterior subregion (posterior, middle, anterior) ANOVA revealed no main effect of lexicality (F (2, 28) = 0.448, p > 0.05) and no interaction (F (4, 56) = 0.987, p > 0.05). To summarize, these results demonstrate that in the sighted group, there was a posterior-to-anterior functional gradient for processing word form during reading in the left but not right vOTC.
Next, we examined these effects in the blind group. We conducted a three-way hemisphere (left, right) by posterior/anterior subregion (posterior, middle, anterior) by lexicality (words, consonant strings, control) ANOVA. Unlike in the sighted group, there was no significant three-way interaction (F (4, 72) = 0.877, p = 0.482). Although there was no interaction, to match the analysis of the sighted group, we nevertheless tested for lexicality effects across the posterior/anterior subregions in each hemisphere separately.
In the left vOTC of the blind group, all three (posterior, middle, anterior) subregions responded most to words, followed by consonant strings, followed by tactile shapes (Figure 1). There was a two-way interaction between lexicality (words, consonant strings, control) and posterior/anterior subregion (posterior, middle, anterior; F (4, 72) = 3.198, p < 0.05). However, the nature of this interaction was different from that observed in the sighted group. All pairwise comparisons between conditions were significant in all three subregions (words > consonant strings > control), except that the difference between words and consonant strings did not reach significance in the anterior vOTC (Bonferroni-corrected paired t-test for words vs. consonant strings: posterior vOTC t(18) = 2.678, p < 0.05; middle vOTC: t(18) = 3.166, p < 0.05; anterior vOTC: t(18) = 2.016, p = 0.177; words vs. control: posterior vOTC: t(18) = 5.463, p < 0.001; middle vOTC t(18) = 8.547, p < 0.001; anterior vOTC: t(18) = 5.874, p < 0.001; consonant strings vs. control: posterior vOTC: t(18) = 3.413, p < 0.01; middle vOTC t(18) = 4.696, p < 0.01; anterior vOTC: t(18) = 5.034, p < 0.001).
Unlike in the sighted group, in the right hemisphere of the blind group, lexicality effects were similar to those in the left hemisphere. All three (posterior, middle, anterior) subregions responded most to words, followed by consonant strings, followed by tactile shapes. There was also a two-way interaction between lexicality (words, consonant strings, control) and subregion (posterior, middle, anterior; F (4, 72) = 7.064, p < 0.001). Pairwise comparisons showed that the posterior right vOTC responded more to words than control (t(18) = 4.112, p < 0.01); the middle vOTC responded more to words than both consonant strings (t(18) = 4.011, p < 0.01) and control (t(18) = 4.819, p < 0.001); and the anterior vOTC responded more strongly to both words and consonant strings than to control stimuli (words vs. consonant strings: t(18) = 2.429, p = 0.07; words vs. control: t(18) = 5.561, p < 0.001; consonant strings vs. control: t(18) = 4.522, p < 0.01). Other pairwise comparisons did not reach significance (posterior vOTC: words vs. consonant strings, t(18) = 2.349, p = 0.091; consonant strings vs. control, t(18) = 2.16, p = 0.134; middle vOTC: consonant strings vs. control, t(18) = 2.073, p = 0.159).
In summary, in the blind group, the entire posterior/anterior extent of the vOTC responded more to words than either consonant strings or tactile shapes. Unlike in the sighted, we did not observe the posterior-to-anterior functional gradient or a left hemisphere dominance for written words.
For the listening task, similar to the reading task, we conducted a four-way hemisphere (left, right) by subregion (posterior, middle, anterior) by lexicality (words, control) by group (sighted, blind) ANOVA. The four-way interaction with group was marginal (F (2, 64) = 2.717, p = 0.074), and we therefore did not proceed to further analyses. It is worth noting that in the sighted group, responses to auditory stimuli were below rest in posterior vOTC and above rest in the more anterior regions. This pattern was not observed in the blind group (see Figure S2).
Topographical preference map of vOTC: gradient only in sighted readers
In order to explore the posterior-to-anterior gradient in a data-driven way, we mapped the topographical preferences of the blind and sighted vOTC during reading using a winner-take-all approach (Figure 1B). We coded the vertex-wise preferences in different colors for words, consonant strings, and control stimuli (see Methods). In the sighted group, a clear posterior-to-anterior gradient in the left vOTC was observed: the posterior section showed a preference for the visual control false font stimuli, whereas anteriorly most vertices preferred consonant strings or words. In the sighted group’s right vOTC, almost all vertices responded most strongly to the control stimuli. These patterns contrast starkly with the blind vOTC maps, which showed a clear, bilateral preference for tactile words throughout the entire extent of both left and right vOTC.
The posterior parietal cortex (PPC) but not S1 of blind readers shows a preference for written Braille words and consonant strings
We tested the hypothesis that the PPC shows preferential involvement in Braille reading, analogous to the vOTC preference for visual print in the sighted group (Figure 2A). A two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA in the reading task showed a main effect of lexicality (F (2, 64) = 13.206, p < 0.001) and a group by lexicality interaction (F (2, 64) = 5.123, p < 0.01; functional ROIs were defined using the consonant strings > control contrast). There was no main effect of group (F (1, 32) = 1.452, p = 0.237). In the sighted group, consonant strings elicited higher responses than both words and control stimuli (Bonferroni-corrected paired t-test, words vs. consonant strings: t(14) = −3.805, p < 0.01; consonant strings vs. control: t(14) = 6.922, p < 0.001; words vs. control: t(14) = 1.406, p > 0.99). By contrast, in the blind group the PPC responded more to both tactile words and consonant strings relative to control stimuli (Bonferroni-corrected paired t-test, words vs. consonant strings: t(18) = 1.571, p = 0.298; consonant strings vs. control: t(18) = 3.028, p < 0.01; words vs. control: t(18) = 3.165, p < 0.01). Note that when the posterior parietal ROI was instead defined using the words > controls contrast, the blind group continued to show a larger lexicality preference than the sighted (see Supplemental Materials for details; Figure S5). These results suggest a specific involvement of the PPC in tactile Braille reading.
For the listening task, the two-way lexicality (words, control) by group (sighted, blind) ANOVA revealed a significant main effect of lexicality in the PPC (words > control, F (1, 32) = 11.112, p < 0.01; see Figure S4). There was no main effect of group (F (1, 32) = 3.275, p = 0.08) and no interaction between group and lexicality (F (1, 32) = 2.372, p = 0.133).
We examined responses of the left SMC hand region to test whether it showed a similar preference for Braille words and consonant strings as the PPC (Figure 2A). For the reading task, the two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA showed a main effect of lexicality (F (2, 64) = 7.265, p < 0.001; functional ROIs were defined using the consonant strings > control contrast), with higher responses to consonant strings than control stimuli. Of note, responses to all stimuli were below rest in the SMC of the blind group. There was no main effect of group (F (1, 32) = 0.604, p = 0.443) and no group by condition interaction (F(2, 64) = 1.501, p = 0.231). For the listening task, the two-way lexicality (words, control) by group (sighted, blind) ANOVA revealed a main effect of group (F (1, 32) = 15.622, p < 0.001), with overall greater responses in the sighted than the blind group. There was no main effect of lexicality (F (1, 32) = 1.933, p = 0.174) and no interaction (F (1, 32) = 0.658, p = 0.423). Results were similar when the SMC ROIs were instead defined using the words > controls contrast. In sum, unlike in the PPC, we found no evidence for specialization of the SMC for Braille reading as compared to perception of control tactile shapes.
Topographical preference map of parieto-occipital stream: shift in preference from shapes to word-like Braille stimuli along anterior-to-posterior axis
Finally, we constructed a data-driven preference map in PPC and parieto-occipital/dorsal occipital cortex analogous to the one created for vOTC (see Figure 2B). In the blind group, this map showed preferential responses to tactile shapes in anterior portions of PPC, immediately adjacent to S1. A small middle region in left and right PPC showed a preference for consonant strings, whereas the most posterior portion of PPC, as well as parieto-occipital and dorsal occipital regions, responded preferentially to words. To summarize, the overall pattern suggests an anterior-to-posterior word-form gradient in the parieto-occipital stream in the blind group, analogous to the posterior-to-anterior vOTC gradient observed in sighted readers.
Left vOTC responds to linguistic stimuli in blind and sighted readers, but differently to words and consonant strings across groups
We examined the effects of lexicality across groups on left vOTC responses during the reading task using a two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA (functional ROIs were defined using the consonant strings > control contrast, Figure 3). We observed a main effect of lexicality (F (2, 64) = 42.293, p < 0.001) and no main effect of group (F (1, 32) = 0.004, p = 0.948). A lexicality by group interaction revealed different response patterns across sighted and blind individuals (F (2, 64) = 10.272, p < 0.001). While in the blind group words elicited larger responses than consonant strings, in the sighted group responses to consonant strings were numerically but not significantly larger than to words (Bonferroni-corrected paired t-test, words vs. consonant strings, blind: t(18) = 3.027, p < 0.05; sighted: t(14) = −1.317, p = 0.614). In both groups, words and consonant strings showed larger responses than control stimuli (all pairwise comparisons p < 0.05) (Figure 3).
For the listening task, a two-way lexicality (words, control) by group (sighted, blind) ANOVA revealed greater overall responses to words than control stimuli (main effect of lexicality, F (1, 32) = 35.919, p < 0.001; functional ROIs were defined using the consonant strings > control contrast). There was no main effect of group (F (1, 32) = 1.362, p = 0.252). The lexicality by group interaction was marginal (F (1, 32) = 3.785, p = 0.061), reflecting a larger difference between audio words and audio control stimuli in the blind group than in the sighted group. A similar pattern was observed when the vOTC functional ROI was instead defined using the words > control contrast (see Supplemental Materials; Figure S3).
The left inferior frontal cortex (IFC) prefers word-like written and spoken stimuli across blind and sighted readers
We analyzed responses in the left IFC across groups with the prediction that this high-level language region would show similar response patterns across blind and sighted readers.
Consistent with this prediction, responses were similar across groups for both tasks in the left IFC. For the reading task, a two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA revealed a significant main effect of lexicality, with larger responses for words and consonant strings over the control condition (F (2, 64) = 46.313, p < 0.001; Figure 3; functional ROIs were defined using the consonant strings > control contrast). Neither the main effect of group (F (1, 32) = 0.004, p = 0.947) nor the interaction (F (2, 64) = 1.017, p = 0.367) was significant. Likewise, for the listening task, the two-way lexicality (words, control) by group (sighted, blind) ANOVA revealed the expected main effect of lexicality (words > control; F (1, 32) = 23.778, p < 0.001). There was no main effect of group (F (1, 32) = 0.753, p = 0.392) and no lexicality by group interaction (F (1, 32) = 0.357, p = 0.554). There was also no group-by-condition interaction when the functional ROIs were instead defined using the words > controls contrast. Both groups still showed a preference for words over control stimuli, and in this case there was also a larger response to words over consonant strings in both groups (see Supplemental Materials for details; Figure S5). These results are consistent with prior studies showing similar responses to spoken and written language in the left inferior frontal cortex of blind and sighted adults.
V1 shows a preference for words in blind readers
We investigated the effects of lexicality across groups in V1 (Figure 3), because it was previously identified as relevant to Braille reading (Sadato et al., 1996; Cohen et al., 1997). As with vOTC, we first examined responses in left V1 during the reading task using the consonant strings > control functional ROIs. A two-way lexicality (words, consonant strings, control) by group (sighted, blind) ANOVA revealed main effects of lexicality (F (2, 64) = 4.247, p < 0.05) and group (sighted > blind, F (1, 32) = 6.964, p < 0.05). There was also a significant lexicality by group interaction (F (2, 64) = 9.487, p < 0.001). In the blind group, V1 responded most to words, and there was no difference between consonant strings and control (Bonferroni-corrected paired t-test, words vs. consonant strings: t(18) = 2.641, p < 0.05; words vs. control: t(18) = 3.691, p < 0.01; consonant strings vs. control: t(18) = 2.367, p = 0.214). In the sighted group, V1 responded more to control stimuli than to consonant strings (Bonferroni-corrected paired t-test, t(14) = 2.652, p < 0.01). There were no differences between the other conditions (all other pairwise comparisons p > 0.05). V1 responses in the blind group were similar when functional ROIs were defined using words > control (see Supplemental Materials for details; Figure S5). In the sighted group, however, a marginal preference for words over false fonts emerged in this alternative analysis (Bonferroni-corrected paired t-test, t(14) = 2.573, p = 0.067; Figure S5). This latter result is consistent with previous studies showing that V1/V2 responds more to words than to non-letter control stimuli such as scrambled words (Szwed et al., 2011; Szwed, Qiao, Jobert, Dehaene, & Cohen, 2014).
For the listening task, the two-way lexicality (words, control) by group (sighted, blind) ANOVA showed a main effect of group (F (1, 32) = 16.067, p < 0.001), with overall greater activation in blind than sighted V1. There was no main effect of lexicality (F (1, 32) = 2.344, p = 0.316) and no interaction between the factors (F (1, 32) = 1.589, p = 0.217). Notably, in the sighted but not the blind group, responses to both words and audio control were below rest. This pattern of results was the same in the words > control ROI (see Supplemental Materials for details; Figure S5).
Lateralization of Braille correlates with spoken language lateralization and Braille-reading hand
We used a lateralization index (LI) analysis to investigate the lateralization of spoken and written language across blind and sighted readers. First, we computed LIs separately for written (tactile/visual words > rest) and spoken (audio words > rest) language in the SMC, PPC, vOTC, V1, IFC and whole cortex in sighted and blind groups. On average, the blind group showed no systematic lateralization for written or spoken words in any region (one-sample t tests of LI = 0, reading: SMC: t(18) = 0.167, p = 0.869; PPC: t(18) = −1.257, p = 0.225; vOTC: t(18) = 0.799, p = 0.435; V1: t(18) = 0.735, p = 0.472; IFC: t(18) = −0.054, p = 0.958; whole cortex: t(18) = −0.166, p = 0.87; listening: SMC: t(13) = −1.332, p = 0.206; PPC: t(18) = 0.051, p = 0.96; vOTC: t(18) = 0.322, p = 0.751; V1: t(18) = −0.506, p = 0.619; IFC: t(18) = −1.135, p = 0.271; whole cortex: t(18) = 0.395, p = 0.697). For the sighted group, we found left-lateralized activation in vOTC, IFC and whole cortex for written words (one-sample t tests of LI = 0, vOTC: t(14) = 5.31, p < 0.001; IFC: t(13) = 5.776, p < 0.001; whole cortex: t(14) = 5.748, p < 0.001). The sighted group’s SMC, PPC and V1 activity was not systematically lateralized for written words (one-sample t tests of LI = 0, SMC: t(10) = 1.172, p = 0.268; PPC: t(13) = 0.404, p = 0.692; V1: t(14) = 1.614, p = 0.129). For spoken words, the sighted group’s vOTC and IFC activity was left-lateralized (one-sample t tests of LI = 0, vOTC: t(14) = 3.42, p < 0.01; IFC: t(13) = 3.767, p < 0.01). We found right-lateralized activation in PPC and V1 for spoken words in the sighted group (one-sample t tests of LI = 0, PPC: t(13) = −3.161, p < 0.01; V1: t(8) = −3.872, p < 0.01). There was no systematic lateralization in the SMC or whole cortex for the listening task (one-sample t tests of LI = 0, SMC: t(13) = −0.848, p = 0.412; whole cortex: t(14) = 1.449, p = 0.169).
To summarize, we found left-lateralized activity in vOTC and IFC for written and spoken words in the sighted group. By contrast, the blind group did not show systematic lateralization in any of the regions or the whole cortex for written or spoken words. Among blind participants there was substantial variability in lateralization, with some participants showing strong left and others strong right lateralization, consistent with previous studies of lateralization of spoken language in this population (Figure 4, see also Lane et al., 2017 and Roder et al., 2002).
Next, we determined whether lateralization of the Braille reading network could be predicted by the laterality of spoken language and Braille reading hand across blind individuals. A multiple regression analysis was conducted in each region, with the LI of spoken words in IFC and dominant reading hand entered as the regressors and the LI of written words as the dependent variable. First, both the dominant reading hand and the LI of spoken words in IFC predicted the LI of written words in PPC, vOTC and whole cortex (PPC: dominant reading hand: β = 0.55, p < 0.001; LI of spoken words in IFC: β = 0.55, p < 0.001; adjusted r2 = 0.843; vOTC: dominant reading hand: β = 0.468, p < 0.01; LI of spoken words in IFC: β = 0.611, p = 0.001; adjusted r2 = 0.727; whole cortex: dominant reading hand: β = 0.399, p = 0.001; LI of spoken words in IFC: β = 0.534, p < 0.001; adjusted r2 = 0.761). Second, in V1 and the IFC, only the LI of spoken words predicted the LI of written words (V1: dominant reading hand: β = 0.258, p = 0.144; LI of spoken words in IFC: β = 0.734, p = 0.001; adjusted r2 = 0.575; IFC: dominant reading hand: β = −0.112, p = 0.359; LI of spoken words in IFC: β = 0.814, p < 0.001; adjusted r2 = 0.702). Last, in the SMC, only the dominant reading hand predicted the LI of written words (dominant reading hand: β = 1.624, p < 0.001; LI of spoken words in IFC: β = 0.311, p = 0.261; adjusted r2 = 0.771). To summarize, in blind individuals, responses to Braille written words and spoken words were co-lateralized to the same hemisphere across most of the Braille reading network, including the vOTC, V1, PPC, and the IFC. Braille reading hand also had an effect on the lateralization of Braille written words in vOTC, PPC, and SMC.
In the sighted group, we did not find overall co-lateralization of spoken and written language to the same hemisphere. The correlation between the LI of spoken words in IFC and the LI of written words in vOTC was not significant (r = −0.233, p = 0.423). In addition, there were no correlations between the LI of spoken words in IFC and the LI of written words in V1, SMC, or whole cortex (V1: r = −0.301, p = 0.296; SMC: r = 0.169, p = 0.62; whole cortex: r = 0.12, p = 0.683). However, the LI of spoken words in IFC was positively correlated with the LI of written words in PPC and IFC (PPC: r = 0.55, p < 0.05; IFC: r = 0.732, p < 0.01).
Asterisks (*) on the bar denote significant difference from 0; asterisks (*) between two bars denote significant difference between the LI of left-handed blind readers and the LI of right-handed blind readers (p < 0.05); crosses (†) on the bar denote marginal difference from 0 (0.05 < p < 0.1); crosses (†) between two bars denote marginal difference between the LI of left-handed blind readers and the LI of right-handed blind readers (0.05 < p < 0.1).
Whole cortex analyses
Tactile Braille (blind) and visual print (sighted) reading activated both common and distinctive cortical areas across groups. For reading as compared to rest, both sighted (visual words) and blind (Braille words) readers activated the bilateral vOTC (blind peak: −41, −57, −13; sighted peak: −41, −58, −12), including the location of the classic VWFA (peak: −46, −53, −20), as well as early visual cortices, specifically the foveal confluence (V1/V2/V3) (Figure 5). Within vOTC, responses in the blind group extended further medially and anteriorly and were more extensive in the right hemisphere, relative to the sighted group. The vOTC activation in the blind group also extended further laterally and superiorly, into lateral occipital, occipitotemporal, and inferior temporal cortex. Both groups also activated posterior prefrontal cortices (inferior frontal gyrus and middle frontal gyrus). Notably, visual cortex responses (e.g., V1) are likely driven, at least in part, by different processes across groups, since the sighted group is performing a visual task, whereas the blind group is performing a tactile task.
In the blind but not sighted group, reading relative to rest produced extensive activation in bilateral posterior parietal cortices, including superior parietal lobule and supramarginal gyrus (SMG). This parietal activation was posterior to early sensory-motor hand representations. The sighted group activated only a small cluster in parietal cortex, in the left superior parietal lobule. The blind, but not sighted group, also activated parieto-occipital and dorsal occipital regions (middle occipital gyrus). The sighted group additionally activated a lateral temporal region that was not observed in the blind group. Finally, whereas responses to written words were left-lateralized in the sighted group, they were bilateral in the blind group.
Listening to words (audio words > rest) likewise revealed partially overlapping responses across groups. In the blind group only, listening to words activated the bilateral vOTC (peak: −42, −44, −16), including the location of the classic VWFA, and early ‘visual’ cortices. Both groups activated classic fronto-temporal language regions in inferior and lateral prefrontal as well as lateral temporal cortices (Figure 5). Responses in frontal regions were left-lateralized in the sighted group and bilateral in the blind group. The sighted but not blind group activated the left sensorimotor cortex/postcentral gyrus.
Reading as compared to hearing words (tactile/visual words > audio words) also revealed similarities and differences across groups (Figure 5). For the sighted group (visual words > audio words), reading words induced greater activation in bilateral vOTC, including the typical location of the VWFA and regions posterior to it, as well as bilateral early visual cortices. Like the sighted, the blind group also activated a region in the left vOTC (fusiform; peak: −27, −61, −14), but this activation was medial to the typical VWFA location. A cluster of activity was also observed lateral to the typical VWFA location in the blind group, in the inferior temporal/lateral occipital cortex (peak: −45 −67 −6). Outside vOTC, a cluster of activity was also observed in the blind group in left foveal early ‘visual’ cortices. The blind but not sighted group also showed extensive activation in posterior parietal cortices, including the SMG and superior parietal lobule. Blind readers also activated dorsal occipital/parieto-occipital cortices during reading. The blind group additionally activated the bilateral superior frontal gyrus and right precentral gyrus. In the sighted group, a small cluster was observed in the left superior parietal lobule.
In sum, we observed the following pattern. First, although both groups activated vOTC during reading, the peak location, distribution and functional profile of responses in vOTC were distinct across groups. Only the blind group showed robust vOTC responses during spoken word comprehension. Second, in contrast to the sighted group, the blind group activated extensive posterior parietal, parieto-occipital, and dorsal occipital areas during (Braille) reading. This parieto-occipital stream was not engaged by spoken word comprehension in blind readers.
Discussion
Consistent with previous studies, we find that reading activates partially overlapping networks across blind readers of tactile Braille and sighted readers of visual print. In particular, we observed similar responses to written and spoken words and letters in the left IFC of sighted and blind people. We also observed partially overlapping responses in the vOTC across groups. In agreement with past findings, the highest peak of activation for Braille reading relative to rest and visual reading relative to rest was near the canonical ‘VWFA’ location (Braille words > rest: −41, −57, −13; sighted visual words > rest: −41, −58, −12) (Cohen et al., 2000; Dzięgiel-Fivet et al., 2021; Kim et al., 2017; Rączy et al., 2019; Reich et al., 2011). However, we also observed key differences in the neural bases of Braille and visual print reading, in vOTC, V1, and posterior parietal cortices, as well as in lateralization patterns.
vOTC of sighted but not blind readers contains a hierarchical word form gradient
Consistent with past research, in sighted readers we observed a posterior-to-anterior functional gradient in the left vOTC only. The posterior portion of the left vOTC responded equally to all visual stimuli in the ROI analysis, the middle portion showed a preference for consonant strings, and the most anterior portion responded more to words and consonant strings than to false fonts. Preferential responses to consonant strings in the middle vOTC of sighted readers are consistent with prior literature showing stronger activation to non-word stimuli in the VWFA when longer presentation times are used or more attention is required (Bruno, Zumberge, Manis, Lu, & Goldman, 2008; Cohen, Dehaene, Vinckier, Jobert, & Montavont, 2008; Dehaene & Cohen, 2011; Ludersdorfer, Schurz, Richlan, Kronbichler, & Wimmer, 2013). A winner-take-all map revealed a similar pattern to the ROI analysis and further showed larger responses to false fonts than to consonant strings or words in posterior portions of the left vOTC. The larger responses to false fonts in posterior vOTC likely reflect greater attention to less familiar visual stimuli, as indicated by slower reaction times and poorer accuracy. This pattern is consistent with prior studies of sighted readers (Ludersdorfer et al., 2013; Wang, Yang, Shu, & Zevin, 2011). In addition, in sighted readers, the posterior vOTC showed a modality-specific response: above-rest activity during visual reading and deactivation during listening, while the most anterior aspect of the left vOTC responded equally to visual and auditory stimuli. These results are consistent with the view that, in sighted readers, the middle and anterior portions of the left lateral vOTC become specialized for the recognition of letters and words, constituting the so-called ‘VWFA.’
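The winner-take-all map referenced above labels each cortical location with whichever condition evokes its largest response. A minimal sketch of that labeling step, using hypothetical beta estimates rather than the study's actual data or pipeline:

```python
import numpy as np

# Hypothetical beta estimates (voxels x conditions); illustrative values only,
# not the study's data. Columns: words, consonant strings, false fonts.
betas = np.array([
    [1.2, 0.8, 0.3],   # a word-preferring voxel
    [0.5, 1.4, 0.6],   # a consonant-string-preferring voxel
    [0.2, 0.4, 1.1],   # a false-font-preferring voxel
])
conditions = ["words", "consonants", "false_fonts"]

# Winner-take-all: assign each voxel the condition with the maximal response.
winners = [conditions[i] for i in np.argmax(betas, axis=1)]
print(winners)  # ['words', 'consonants', 'false_fonts']
```

In practice such maps are computed per vertex on the cortical surface and typically thresholded so that only voxels with reliable above-baseline responses receive a label.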
By contrast, in the blind group, we found no evidence for left-lateralization, for a posterior-to-anterior functional gradient, or for a posterior/anterior change in modality preference. In blind readers, the entire posterior/anterior extent of bilateral vOTC showed a preference for Braille words over consonant strings and tactile shapes during reading, and a larger response to spoken words than to backward speech. Unlike in the sighted group, where the middle vOTC responded more to consonant strings than to words, no portion of the vOTC in the blind group showed a preference for consonant strings over words and shapes. In addition, there was no change in preference for written as opposed to spoken words along the posterior/anterior extent of vOTC.
The whole-cortex analysis also revealed differences in the lateral/medial organization of vOTC across groups. As noted above and previously documented, when Braille and spoken words were compared to rest, a peak of activation was observed in the classic VWFA region along the medial/lateral axis, although in blind readers, additional activity was also observed throughout much of vOTC. By contrast, when Braille words were compared to spoken words in the blind group, peak activity in the vOTC was medial to the classic VWFA location (peak: −27, −61, −14). We did not observe such a medial peak for the same contrast in sighted readers. This medial vOTC region has previously been shown to be functionally connected to dorsal parietal cortices, which are involved in spatial attention and effortful letter-by-letter reading in sighted people (Bouhali, Bézagu, Dehaene, & Cohen, 2019; Cohen et al., 2008; Corbetta & Shulman, 2002; Henry et al., 2005; Saalmann, Pigarev, & Vidyasagar, 2007). As discussed in detail below, the PPC appears to play an important role in Braille reading and may send information to medial vOTC in blind readers.
Although the precise role of vOTC in Braille reading remains to be determined, the present evidence suggests that while the vOTC is involved in both tactile and visual reading, the anatomical distribution of responses within vOTC, their functional profile, and therefore likely their cognitive contribution differ. In sighted readers, lateral vOTC receives information from early visual areas, sends it onward to fronto-temporal language regions, and receives top-down input from those language regions (Bouhali et al., 2014; Hannagan et al., 2015; Saygin et al., 2016; Stevens et al., 2017b; Yeatman et al., 2013). Lateral vOTC thus contributes to decoding linguistic information (phonological, semantic, and grammatical) from visual word forms (Dehaene & Cohen, 2011; Price & Devlin, 2011). By contrast, we hypothesize that in blind readers of Braille, the classic VWFA location in lateral vOTC receives linguistic (i.e., semantic, grammatical) information from fronto-temporal language circuits and serves as one of the entry points for language into posterior ‘visual’ circuits. This hypothesis is supported by prior studies showing that in blind but not sighted people, the classic VWFA location is sensitive to the syntactic complexity of spoken sentences and shows enhanced responses to spoken language (Burton, Snyder, Diamond, et al., 2002; Dzięgiel-Fivet et al., 2021; Kim et al., 2017; Lane et al., 2015). At the same time, the current data and prior evidence suggest that other parts of the ‘visual’ cortex, including a medial portion of vOTC, may play a role in Braille reading. An intriguing albeit speculative possibility is that medial vOTC receives Braille-relevant input from the PPC. Lacking connectivity data, the present study cannot test this hypothesis directly. One way to test this possibility in future work would be to use online TMS in combination with fMRI to disrupt information flow to the vOTC in blind readers of Braille by stimulating parietal cortices.
Parieto-occipital decoding stream in blind readers of Braille
We observed more extensive, and qualitatively different, involvement of posterior parietal/parieto-occipital cortices in Braille reading than in visual print reading. Large segments of PPC were activated during Braille reading relative to rest and spoken word comprehension. PPC activity in the blind group extended inferiorly and anteriorly, into regions adjacent to and immediately posterior to S1, including the supramarginal gyrus (SMG) and much of the superior parietal lobule. Notably, the hand regions of S1 itself did not show robust responses during Braille reading or preferential responses to Braille letters or words, consistent with prior studies (Burton, Snyder, Conturo, et al., 2002; Kupers et al., 2007). Additionally, in the blind group only, parietal activation extended posteriorly into parieto-occipital and dorsal occipital regions adjacent to parietal cortices and ultimately into the foveal confluence. By contrast, visual print reading (relative to false fonts) by sighted readers activated only a small region within the superior parietal lobule, consistent with prior studies (Cohen et al., 2008; Martin, Schurz, Kronbichler, & Richlan, 2015; Reilhac, Peyrin, Démonet, & Valdois, 2013).
The cognitive role of the wider parietal network in Braille reading is not known. The PPC has strong connectivity with S1 and contains high-level tactile areas, as well as multimodal representations of texture and shape (Bauer et al., 2015; Hegner et al., 2010; Kaas, 2012). Some of the activation we observed likely reflects processes related to recognition of the tactile patterns that constitute Braille but are not specific to Braille letters or words (Boven, Hamilton, Kauffman, Keenan, & Pascual-Leone, 2000; Wong, Gnanakumaran, & Goldreich, 2011), akin to the general responses to shapes, including false fonts, observed in vOTC of sighted readers (Grant, Thiagarajah, & Sathian, 2000; Sathian & Stilla, 2010; Stilla et al., 2008). Consistent with this possibility, much of the PPC, particularly its anterior portion, was more responsive to the more tactilely complex and unfamiliar dot shapes than to Braille letters or words. Again, this paralleled preferential responses to false fonts in posterior vOTC of sighted readers. Importantly, however, within the larger swath of PPC activation, ROI analyses revealed word- and letter-preferring subregions in the blind group, suggesting a specific involvement in Braille processing.
Word-specific activation in parieto-occipital areas extended posteriorly into dorsal occipital cortices, only in the blind group. Unlike anterior portions of the PPC, parieto-occipital and dorsal occipital areas showed larger responses to Braille words than Braille consonants or control shapes. However, like anterior PPC, parieto-occipital and dorsal occipital regions responded more to Braille words than to spoken words. This pattern suggests that parieto-occipital and dorsal occipital areas are involved in reading-specific processing, rather than language comprehension or tactile pattern recognition.
The winner-take-all map of PPC showed that the preference for words is located in the posterior aspect of the PPC, adjacent to parieto-occipital and dorsal occipital areas. Interestingly, in the blind group only, this map also revealed consonant-preferring regions in an anatomically intermediate position between shape-preferring areas in anterior portions of PPC and word-preferring areas in parieto-occipital and dorsal occipital cortices. These regions did not emerge in corrected whole-brain analyses and therefore should be interpreted with caution, requiring investigation in future studies. However, the overall pattern suggests an anterior-to-posterior parieto-occipital reading stream, analogous to the posterior-to-anterior vOTC gradient observed in sighted readers. Within this gradient, parietal regions closer to S1, in anterior PPC, represent shape/texture information relevant to Braille, with posterior PPC and parieto-occipital regions representing Braille orthography, and still more posterior occipital areas representing linguistic information.
As noted in the Introduction, involvement of the PPC in Braille reading is predicted by connectivity-based theories of brain function (Bedny, 2017; Hannagan et al., 2015; Mahon & Caramazza, 2011; Saygin et al., 2016). One hypothesis, therefore, is that the PPC, along with adjacent parieto-occipital areas, plays an analogous role in Braille orthographic processing to the role of the vOTC in orthographic processing of visual print: conversion of tactile patterns to orthographic representations (Dehaene & Cohen, 2011; Dehaene et al., 2005).
Further work is needed to uncover the precise cognitive contribution of PPC and parieto-occipital cortices to Braille reading. In sighted readers, the PPC also contributes to reading, but under different circumstances. The PPC is thought to participate in grapheme-to-phoneme conversion, letter position decoding, as well as working memory processes, and shows more robust activity when effortful letter-by-letter reading is required (e.g., when words are degraded) (Carreiras, Quiñones, Hernández-Cabrera, & Duñabeitia, 2015; Cohen et al., 2004; Costanzo, Menghini, Caltagirone, Oliveri, & Vicari, 2012; Dehaene-Lambertz, Monzalvo, & Dehaene, 2018; Henry et al., 2005; Jonides et al., 1998; Koenigs, Barbey, Postle, & Grafman, 2009; Ossmy, Ben-Shachar, & Mukamel, 2014; Taylor, Rastle, & Davis, 2013). Parietal cortex also shows sensitivity to phonological rather than orthographic information during visual reading, in contrast to the VWFA (Booth et al., 2003; Bouhali et al., 2019). In future studies, it would be interesting to separate parietal responses to phonological as opposed to word-form information in blind readers of Braille. In addition, further research is needed to explore the anatomical layout of Braille-responsive parietal areas. For example, whether the parieto-occipital stream contains punctate regions analogous to the VWFA, or more distributed responses to Braille letters and words, remains an open question. Likewise, in future studies, it will be important to test the precise role of the PPC in Braille reading and to dissociate the functions of PPC, parieto-occipital, and dorsal occipital regions.
Differential role of early visual cortex in Braille and visual print reading
We observed responses to reading in V1 in the blind but not sighted group. Like dorsal occipital areas, V1 showed a preference for words over consonant strings and control shapes. The involvement of V1 in Braille reading is consistent with previous studies (Cohen et al., 1997; Kupers et al., 2007; Sadato et al., 1996). We further found that, in whole-cortex results, a portion of V1 (foveal aspect of left V1) responded more to Braille reading than auditory word comprehension, whereas other portions of V1 (right hemisphere, and peripheral) did not show such a preference. This evidence is consistent with prior work suggesting that V1 does not have a single, homogeneous function in people who are blind but rather contains multiple anatomically separable functional subdivisions (Amedi et al., 2003; Bedny et al., 2011; Burton, Diamond, & McDermott, 2003; Burton, Snyder, Diamond, et al., 2002; Kanjlia et al., 2021; Kanjlia, Pant, & Bedny, 2019; Lane et al., 2015; Noppeney et al., 2003). Likewise, V1 may contain anatomically separable Braille-specific and high-level language responses in blind readers of Braille.
Lateralization of Braille reading: effects of spoken language lateralization and reading hand
With the exception of the primary somatosensory cortex, laterality of responses to written words in the entire reading network (vOTC, PPC, V1, and IFC) is predicted by the laterality of spoken word comprehension across blind individuals. On average, congenitally blind individuals showed reduced left-lateralization of responses to spoken and written words (see also Lane et al., 2017). Those blind individuals who showed right-lateralized responses to spoken words also showed right-lateralized responses to written words. Previous studies of sighted readers with right-hemisphere spoken language responses have likewise observed co-lateralization of spoken and written language (Cai et al., 2010; Van der Haegen et al., 2012). We did not observe this pattern in the current sighted sample, possibly because all sighted participants in the current study had strongly left-lateralized responses to spoken language and thus there was little interindividual variability. Together, these data suggest that written and spoken language tend to co-lateralize in blind and sighted readers alike. This observation is consistent with the hypothesis that strong connectivity to spoken language networks is one of the determining factors of which regions become ‘recycled’ for reading.
We also found a significant effect of reading hand on the lateralization of Braille reading that was independent of the effect of spoken language lateralization. That is, right-hand Braille readers showed more left-lateralized activation whereas left-hand Braille readers showed a bilateral response to Braille. In contrast to the effect of spoken language on laterality, the effect of reading hand was strongest in the primary somatosensory cortex, persisted in PPC and vOTC, and was absent in IFC and V1. This observation is consistent with the idea that V1 occupies the top of a processing hierarchy for people who are blind (Buechel, 2003). Effects of reading hand thus persist past S1 but wane at higher stages of processing, whereas effects of language lateralization are most prominent at higher processing stages and disappear in early sensory areas (i.e., S1).
In sum, the lateralization of Braille reading is jointly determined by the lateralization of spoken language and the input hand that receives the initial Braille stimulus. Although specific lateralization patterns differ across sighted and blind groups, an analogous connectivity principle appears to govern lateralization of reading in sighted and blind readers: lateralization depends jointly on connectivity to sensory input regions (unilateral S1/ bilateral V1) and language networks.
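Laterality in analyses such as these is commonly summarized with a lateralization index. The paper's exact formulation is not restated here, but a standard version is LI = (L - R) / (L + R), where L and R are activation magnitudes (or suprathreshold voxel counts) in each hemisphere. A minimal sketch with hypothetical values:

```python
def lateralization_index(left: float, right: float) -> float:
    """Return (L - R) / (L + R): +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = perfectly bilateral."""
    return (left - right) / (left + right)

# Hypothetical activation magnitudes, not the study's data.
# A strongly left-lateralized profile (typical of sighted readers here):
print(lateralization_index(9.0, 1.0))  # 0.8
# A more bilateral profile (typical of the blind group on average):
print(lateralization_index(6.0, 4.0))  # 0.2
```

Under such an index, the co-lateralization finding amounts to a positive correlation between individuals' spoken-word and written-word LI values across the reading network.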
General conclusions
We find that the neural basis of Braille reading differs from that of visual print reading in several ways. While visual print reading recruits a posterior/anterior vOTC gradient, no such gradient is observed in the vOTC of blind readers of Braille. Blind readers of Braille recruit posterior parietal cortices to a greater degree and in a different way compared to visual print reading in sighted people. Only blind readers show preferential responses to written words in PPC and parieto-occipital cortex. We observed suggestive evidence for an anterior-to-posterior stream of processing in the parietal cortex of blind Braille readers, with anterior parietal areas involved in tactile pattern perception and more posterior parietal, parieto-occipital and dorsal occipital regions involved in word recognition. In blind and sighted readers alike, lateralization of spoken language predicts lateralization of written language. However, on average, spoken word and visual word recognition is highly left-lateralized in sighted people. By contrast, neither Braille reading nor spoken word recognition is strongly left-lateralized in people who are born blind. In blind readers of Braille, reading hand also affects lateralization of responses to Braille.
Comparing the neural basis of reading across blind and sighted people suggests that there is no ‘standard reading brain.’ The input modality of symbols influences the neural basis of their recognition. At the same time, similar anatomical principles govern the localization of visual print and tactile Braille. Connectivity patterns constrain the localization of visual print and tactile Braille reading alike.
Credit Author Statement
J.K., S.K., and M.B. designed research; J.K., S.K., and M.B. performed research; M.T. analyzed data; M.T., E.S. and M.B. wrote the paper.
Declaration of Competing Interest
The authors declare no competing interests.
Acknowledgments
We would like to thank all our blind and sighted participants, the blind community, and the National Federation of the Blind. Without their support, this study would not have been possible. This work was supported by grants from the Johns Hopkins Science of Learning Institute (80034917) and the NIH/NEI (R01 EY027352-01). We would also like to thank the F. M. Kirby Research Center for Functional Brain Imaging at the Kennedy Krieger Institute for their assistance in data collection.
References