Abstract
We investigated the experiential bases of knowledge by asking whether people who perceive the world in a different way also show a different neurobiology of concepts. We characterized the brain activity of early-blind and sighted individuals during a conceptual retrieval task in which participants rated the similarity between color and action concepts. Between-categories analysis showed that, whereas multimodal concepts (action) activated a similar fronto-temporal network in the sighted and blind, color knowledge activated partially different brain regions in the two groups, with the posterior portion of the right IPS being significantly more active in the sighted compared to the blind. Interestingly, regions that were similarly activated in sighted and blind during conceptual processing (lpMTG for action, and precuneus for color) nevertheless showed increased task-dependent connectivity with occipital regions in the blind. Finally, within-category adaptation analysis showed that word pairs referring to perceptually similar colors or actions led to repetition suppression in occipital visual areas in the sighted only, whereas adaptation was observed in language-related temporal regions in the blind. Our results show that visual deprivation changes the neural bases of conceptual retrieval, which is partially grounded in sensorimotor experience.
Significance statement Do people with different sensory experience conceive the world differently? We tested whether conceptual knowledge builds on sensory experience by looking at the neurobiology of concepts in early blind individuals. We show that cortical regions involved in the processing of multimodal concepts (actions) are mostly similar in blind and sighted, whereas strictly visual concepts (colors) activated partially different brain topographies. Moreover, brain regions classically involved in conceptual retrieval showed different connectivity profiles in the blind, working in concert with re-organized “visual” areas. Finally, we further demonstrate that perceptual distance between concepts is represented in the visual cortex of the sighted, but not the blind. Blindness changes how the brain implements conceptual knowledge, which is partially grounded in sensorimotor experience.
Introduction
As we think, we navigate and retrieve conceptual knowledge, but the nature of this knowledge is highly debated. One of the major sources of disagreement is whether thinking is more similar to the analog replay of our experience 1,2, or to the symbolic computations of a Turing machine 3,4. On one side, empiricist approaches suggest that concepts are based on sensorimotor representations: Thinking is simulating (or re-instantiating) our own sensorimotor experience of the world 5,6. In support of this view, neuroimaging studies have shown that people activate sensorimotor areas when retrieving conceptual knowledge 7–9. For instance, words referring to objects with specific colors or shapes (e.g., tomato, ball), or directly to color and shape properties (e.g., red, square), may activate occipito-temporal areas such as V4 and LOC, respectively 10–13. Similarly, words referring to action verbs or manipulable objects activate neural networks involved in action execution and observation, such as somatosensory areas, premotor cortices and temporo-occipital areas 10,14.
Yet, empiricist theories of concepts have been repeatedly challenged by more rationalistic approaches suggesting that knowledge is retrieved and represented via abstract symbols and propositional representations that are modality-invariant 15,16. In this view, conceptual representations are particular configurations of interconnected symbols that are neither tactile, nor motor, nor visual. Thus, conceptual retrieval does not require the on-line activation of the sensorimotor system 16. Accordingly, the putative sensorimotor activations elicited during conceptual processing in several studies could instead be interpreted as the activity of symbolic (experience-invariant) neural computations that happen to occur in the proximity of sensorimotor cortices 15. One of the strongest lines of support for this view comes from the cognitive neuroscience of blindness 15,17. The reasoning is straightforward: If conceptual processing is largely grounded in experience, people who experience the world in a different way should also show a different neurobiology of concepts, at least in part 18. Congenital blindness is an ideal model to test this hypothesis, and the available data seem to support the symbolic view. In a seminal study 19, when sighted and blind participants were asked to retrieve information about geometric figures (e.g., is a pyramid curved?) or highly visual entities (e.g., is the dusk usually dark?), people in both groups activated the same fronto-temporal semantic retrieval system, and no visual (or other) areas were more active in the sighted than in the blind. Similarly, in other experiments, knowledge about objects’ shape 20, or about small and manipulable objects 21, activated the lateral occipital complex; thinking about big non-manipulable objects activated the parahippocampal place area 22; and processing action verbs (compared to nouns) activated the left posterior middle temporal gyrus 23 in both sighted and blind. These results seem to suggest that blindness leaves the neurobiology of conceptual retrieval largely unchanged, and that experience plays a minor role in shaping mental representations 15.
However, previous studies involving congenitally blind people have some important limitations. Most of these studies investigated highly multimodal conceptual categories, such as shape, actions and tools, which can be perceived across different senses. In these cases, blind people can re-map visual features onto other senses, minimizing the impact of visual deprivation 24, especially in brain regions where different kinds of sensory information seem to converge. For instance, the anterior ventral occipito-temporal cortex (aVOTC), which shows striking functional resilience to visual deprivation 22,25, seems to support the integration of visual, auditory and tactile information into multimodal representations 8,9,26–28 that are crucial for categorical knowledge 29 (e.g., distinguishing faces from houses, or animals from tools). At this level of the ventral-occipital hierarchy, different categories (e.g., objects, animals, tools, actions) are represented in a format that is relatively independent from visual appearance 30–32, and thus more likely to be resilient to visual deprivation. In contrast, posterior occipital regions organize information based on a visual code that is largely independent of categorical membership 30,33. Therefore, to the extent that posterior occipital areas encoding visual features are also recruited during conceptual retrieval 10,34,35, differences between sighted and blind should emerge in these regions, as predicted by empiricist theories.
In our study, we investigated the impact of blindness on how the brain implements conceptual knowledge in two ways. First, we compared a highly multimodal category, action, with a highly visual-unimodal category, color. Since visual experience of actions can be remapped onto experience in other senses, we predicted that the neural network involved in retrieving action categories in the sighted would be preserved in the blind 23. On the contrary, since color knowledge cannot be experienced through nonvisual senses, becoming an abstract property of objects for the blind 36, we predicted that different brain areas would be involved in its representation as a function of visual deprivation. Second, we investigated the neural activity related to within-category perceptual similarity (e.g., similar vs. different actions). We predicted that perceptual similarity would be represented in posterior occipital areas in the sighted, but not in the blind. Such results would (i) be in line with previous findings showing that these areas encode visual features in the sighted 30,33, (ii) confirm their involvement in conceptual retrieval 10,34,35 and (iii) show, for the first time to our knowledge, that visual deprivation changes the neurobiological basis of conceptual processing by precluding visual representations.
Results
Participants (18 early blind and 18 sighted controls; see Table S1 for demographic details) listened to pairs of action (e.g., kick - jump) or color words (e.g., red - yellow) in the MRI scanner, and rated each pair in terms of similarity on a scale from 1 to 5 (1 = very different; 5 = very similar).
Univariate analysis: Sighted and blind activate a similar brain network for multimodal action concepts but not for unimodal (visual) color concepts
Behavioral analysis: Reaction time analysis using a mixed ANOVA, with Category (action, color) as within-subject factor and Group (sighted, blind) as between-subjects factor, showed no difference between categories (F(1,33)=2.37, p>0.05, η2=0.07), no difference between groups (F(1,33)=0.074, p>0.05, η2=0.002) and no Category by Group interaction (F(1,33)=0.69, p>0.05, η2=0.02).
fMRI analysis: The contrast Action > Color did not reveal any significant difference between groups, suggesting comparable neural activity across sighted and blind during the retrieval of action concepts (see Fig. S1 for details). Indeed, a conjunction analysis between groups showed common significant activation in the left posterior middle temporal gyrus (lpMTG; peak = −54, −61, 5; Fig 1A).
Regional BOLD responses are depicted over multiplanar slices and renders of the MNI-ICBM152 template. (A) Suprathreshold cluster (P<0.05 FWE corrected) showing common activity in the lpMTG for the contrast Action > Color in both sighted and early blind (conj., conjunction analysis); (B) Suprathreshold cluster (P<0.05 FWE corrected) showing greater activity in the rIPS, in sighted compared to early blind, for the contrast Color > Action; (C) Suprathreshold cluster (P<0.001 uncorrected; for illustrative purposes only) showing common activity in the precuneus for the contrast Color > Action in both sighted and early blind (conj., conjunction analysis); (D) Suprathreshold clusters (P<0.005 uncorrected; for illustrative purposes only) showing greater connectivity of the occipital areas of early blind people with the lpMTG (PPI for the contrast Action > Color); (E) Barplot (for illustrative purposes only) showing beta weights derived from the PPI analysis with seed in the lpMTG, in sighted (gray) and blind (white), in the right and left middle occipital gyrus (MOG) for the contrast Action > Color (arbitrary units ± SEM); (F) Suprathreshold clusters (P<0.005 uncorrected; for illustrative purposes only) showing greater connectivity of the occipital areas of early blind people with the precuneus (PPI for the contrast Color > Action); (G) Barplot (for illustrative purposes only) showing beta weights derived from the PPI analysis with seed in the precuneus, in sighted (gray) and blind (white), in the right and left middle occipital gyrus (MOG) for the contrast Color > Action (arbitrary units ± SEM).
On the other hand, the analysis of the contrast Color > Action revealed a cluster in the right parietal cortex, in and around the right intraparietal sulcus (rIPS), showing higher activity for color concepts in sighted compared to blind (peak = 33, −43, 35; Fig. 1B).
A conjunction analysis between groups for the contrast Color > Action did not reveal any common activation between sighted and blind after correction for multiple comparisons at the whole-brain level, in line with the hypothesis that color-related conceptual processing engages partially different neural systems as a function of visual deprivation (see Fig. S1 for details). However, when displaying the conjunction results at a more lenient threshold (p<.001 uncorrected; Fig 1C), a unique common activity for color concepts emerged in the right precuneus (peak = 6, −55, 26). Accordingly, within-group analyses showed significant precuneus activity in the blind (peak = 6, −52, 20, p = .04) and marginally significant activity in the sighted (peak = 0, −61, 29, p = .06), with no significant difference between groups (Table S1; Fig. S1).
Psychophysiological interactions: The precuneus and lpMTG display similar category selectivity across sighted and blind but show different connectivity profiles
We relied on Psychophysiological Interaction (PPI) analysis 37,38 to test whether regions that showed similar categorical preference across groups (lpMTG for action > color, and the precuneus for color > action) also maintain a similar connectivity profile in both groups. In particular, we tested the hypothesis that posterior occipital areas in the blind could be recruited during conceptual retrieval, connecting with conceptual hubs 39 such as the lpMTG and the precuneus as a consequence of neural reorganization 40,41 (see also Fig. S2, showing higher activity in the occipital cortex of EB versus SC when processing both conceptual domains compared to rest). With this aim, we selected two ROIs in the left and right middle occipital gyrus (MOG) that (i) are recruited in the early blind during high-level conceptual tasks such as language processing and math 42,43; and (ii) show increased long-range connectivity, in the early blind, with extra-occipital regions (e.g., frontal, parietal and ventral temporal cortices) during resting state 42–45 and task-based (PPI) analysis 19.
PPI with seed in the lpMTG revealed an increase of action-selective functional connectivity in both occipital ROIs of blind people compared to their sighted counterparts (lMOG: t=3.59, p=0.02; rMOG: t=3.46, p=0.026; Fig 1D & 1E). Similarly, PPI with seed in the precuneus revealed an increase of color-selective functional connectivity in the occipital cortices of blind compared to sighted participants (lMOG: t=3.12, p=0.054; rMOG: t=4.09, p=0.007; Fig 1F and 1G). Thus, despite showing a similar activity profile in sighted and blind during conceptual processing, the lpMTG and the precuneus showed a different connectivity profile as a function of early visual deprivation. Such an increase in task-based connectivity suggests that occipital areas in the early blind can be flexibly recruited during conceptual processing in interaction with conceptual hubs.
On the other hand, PPI with seed in the right IPS, for the contrast Color > Action, did not reveal any significant color-specific connectivity profile, either within SC or for the contrast SC > EB.
Adaptation analyses: Within-category similarity is encoded in occipital areas in the sighted but not in the blind
Classic univariate and functional connectivity analyses revealed a multifaceted impact of visual deprivation on conceptual retrieval. Yet, they did not reveal any preferential activity in the visual cortex of the sighted more than the blind, as would be predicted by an empiricist approach. With adaptation analyses we tested whether this difference emerges when we look for areas that encode perceptual similarity. The rationale was that the direct contrast between pairs with high versus low perceptual differences would induce a release from adaptation 46–48, thereby probing regions that are specifically sensitive to the perceptual distance between concepts. Our prediction was that this finer-grained encoding of perceptual distances would be represented in posterior occipital areas in the sighted, but not in the blind.
Behavioral analysis
Similarity ratings were highly correlated between sighted and blind, both for action (r= .99) and color words (r= .93; Fig. 2A). In order to perform the adaptation analysis we divided the trials into similar pairs (e.g., red - orange) and different pairs (e.g., red - blue), based on each participant’s subjective ratings. Rating distributions for each subject and category (color, action) were divided into 5 intervals with a similar number of items (see the Methods section for details). Stimulus pairs in the first two intervals were labeled as different (low similarity ratings), the 3rd interval contained medium pairs, and the 4th-5th intervals similar pairs (high similarity ratings; Fig 2B). Overall, the average number of “different” trials was slightly larger than the number of “similar” ones (126 vs 115; F=8.41, p=0.007, η2=0.20; Fig. 2C). However, there was no Similarity by Group interaction (F=0.18, p=0.67, η2=0.004), indicating that this imbalance (which reflected personal judgments of similarity) was the same across SC and EB (Fig. 2C). An analysis of reaction times showed that Medium pairs (not analyzed in fMRI) had on average longer latencies than Similar and Different ones (main effect of Similarity: F=21.07, p<0.001, η2=0.38). This was expected, since pairs that are neither clearly similar nor clearly different require longer and more difficult judgments. Crucially, there was no difference in reaction times between different (mean=1.80 s, SD=0.39) and similar pairs (mean=1.79 s, SD=0.37; F=0.09, p=0.76, η2=0.003), and no interaction between Similarity and Group (F=0.04, p=0.84, η2=0.001; Fig 2D).
(A) Similarity judgments were highly correlated across groups both for actions and colors; (B) Conceptual schema of the division of word pairs into “different” and “similar” based on subjective similarity ratings; (C) Barplot depicting the average number of items in the “different”, “medium” and “similar” categories; the number of items in the “different” and “similar” categories is very similar across groups (number of trials ± SEM); (D) Barplot depicting the average reaction time in the “different”, “medium”, and “similar” categories; the average RTs of the “different” and “similar” categories are very similar across groups (seconds ± SEM).
fMRI analysis
To find brain areas showing adaptation based on conceptual similarity, we looked at the contrast Different Pairs > Similar Pairs, with Medium pairs as a regressor of no interest. Action and color pairs were considered together since, at the whole-brain level, we did not find a significant higher-order interaction between Similarity (different, similar) and Category (see the Methods section for analysis details). In the sighted, similar concepts led to repetition suppression in several occipital areas (see Fig. 3A), with a significant cluster in and around the left lingual gyrus (peak coordinates: −24, −70, −7). In the blind, instead, adaptation emerged in language-related areas, with significant clusters along the middle and superior temporal gyrus bilaterally (peak coordinates RH: 57, −28, 8; LH: −60, −10, −7) and in the right precentral gyrus (peak coordinates: 27, −25, 56; Fig. 3B). Importantly, no adaptation in posterior occipital areas was observed in the blind.
A comparison between groups showed greater adaptation in occipital cortices for sighted compared to blind (Fig. 3C), with peaks in the left superior occipital gyrus (−24, −91, 26), the left lingual gyrus (−24, −70, −7) and the right middle occipital gyrus (27, −85, 11). The contrast Blind > Sighted showed increased adaptation in the posterior lateral temporal cortices (PLTC) bilaterally (Fig. 3D). A planned ROI analysis in the PLTC, a region that consistently shows repetition suppression for semantic similarity 49–51, revealed significantly greater adaptation for similar concepts in blind than in sighted (Conceptual Similarity by Group interaction; lPLTC= −45 −31 20, t=3.23, P= 0.035; rPLTC= 45 −28 11, t=3.41, P= 0.024).
Regional BOLD responses are depicted over renders of the MNI-ICBM152 template. (A) Suprathreshold clusters showing neural adaptation for similar word pairs in the occipital cortices of sighted participants; (B) Suprathreshold clusters showing neural adaptation for similar word pairs in the temporal and somatosensory-motor cortices of early blind participants; (C) Suprathreshold clusters showing neural adaptation for similar word pairs in sighted compared to blind, and (D) blind compared to sighted; (E) Suprathreshold cluster showing neural adaptation for similar color-word pairs in the sighted, with regional activity localized in the left PCoS/V4; (F) Suprathreshold clusters showing neural adaptation for similar action-word pairs in the sighted, with regional activity spread in different occipital areas including the left PCoS/V4 and the left V5. Cluster threshold at P<0.005 uncorrected, for illustration only.
Finally, we performed planned ROI analyses in the color-sensitive region at the posterior banks of the collateral sulcus (PCoS), corresponding to the V4 complex 10,52, and in the motion-sensitive region V5 53. In area PCoS-V4 we found greater adaptation for both color and action in sighted compared to blind (Conceptual Similarity by Group interaction; peak= −24 −73 −10, t=4.26, P= 0.004; Fig 3 E-F). In contrast, the analysis in V5 showed that repetition suppression was specific to action concepts in the sighted, with no adaptation observed in the blind (Conceptual Similarity by Group by Category interaction; peak: −51 −76 8, t=3.29, P= 0.037; Fig 3F).
Discussion
Embodied approaches to conceptual knowledge suggest that concepts are grounded in our sensory and motor experience of the world 2,7. A straightforward hypothesis emerging from these theories is that people who perceive the world in a different way should also have different conceptual representations 18. Congenital blindness offers an ideal test-bed for this hypothesis, which is crucial to validate empiricist theories of concepts: If seeing the color red adds nothing to the neural representation of the concept of red, which can be constructed equally well by learning its properties through a verbal code, then the idea that we rely on distributed analogical simulations to construct and retrieve knowledge becomes necessarily weak 15.
In our study we tested this hypothesis by characterizing the brain activity of sighted and early blind individuals while they rated the similarity of action and color concepts during fMRI. Whereas the experience of action is highly multimodal, and accessible to blind people through the remaining senses, the sensory experience of color is purely visual, and inaccessible to the blind. This epistemological difference was reflected in the differential brain activations observed in our blind and sighted groups. Sighted and blind activated a similar fronto-temporal network to retrieve action knowledge, with the highest common activity located in the left posterior MTG, and no brain area showing between-groups differences (see Fig. S1 for details). In contrast, color knowledge activated different brain regions in the two groups, with the posterior portion of the right IPS being significantly more active in the sighted compared to the blind. The IPS is known to be involved in the perception of color 52,54,55 as well as of other visual features 56–58, and its anatomical position makes it a good candidate to work at the interface between perceptual and conceptual representations 54. In particular, it seems to be crucial for retrieving perceptually-based categorical knowledge 54,55. Indeed, the peak of color-specific activity that we found (peak coordinates: 33, −43, 35) is very close to the color-specific rIPS area found by Cheadle & Zeki (ref 54; peak coordinates: 30, −39, 45). The lack of visual input in blind people prevents the formation of perceptually-driven color representations, which may limit the contribution of the IPS during the retrieval of color knowledge. This is not the case for action representation, for which perceptual knowledge can be compensated by other senses (e.g., touch, audition). In sum, this result shows that, in the case of visual-unimodal categories for which direct experience cannot be compensated by nonvisual perception, conceptual retrieval is instantiated in blind and sighted through partially different neural networks.
However, in keeping with previous findings 19,20,23, we also found commonalities across the two groups. Both sighted and blind similarly engaged the lpMTG during action processing and the precuneus during color processing. Interestingly, though, psychophysiological interactions (PPI) showed that early visual deprivation changes the connectivity profile of these regions, increasing their functional coupling with occipital regions in the blind. This result suggests that the occipital cortex in the early blind is re-organized to extend its integration into conceptually selective networks, further highlighting how visual deprivation impacts the neurobiology of conceptual knowledge. Previous studies have shown that occipital areas in the early blind are recruited for high-level conceptual tasks such as language processing, semantic retrieval and math 42,43,59–61, and that they increase long-range connectivity with frontal and parietal cortices during rest 42–45 and with inferior temporal cortices during semantic judgments 19. Here, however, we showed that the early blind seem to rely on enhanced connectivity between occipital cortices and temporo-parietal conceptual hubs during conceptual processing itself. Although our data remain correlational, they suggest that EBs’ occipital regions are not activated independently of other “classic” regions involved in conceptual retrieval, but instead work in concert with them.
Classic univariate and PPI analyses showed that early visual deprivation modifies the neural network supporting the retrieval of color knowledge and the connectivity pattern of temporal and parietal areas involved in conceptual processing. Yet, these analyses failed to show any concept-related activity in visual areas in the sighted more than in the blind, as an empiricist account would predict. This is not surprising, since both action and color are highly visual categories for the sighted, and should both engage visual simulations.1 In order to reveal perceptual-specific mechanisms of conceptual retrieval we needed to investigate finer-grained patterns of representation based on perceptual similarity. We did so with our adaptation analysis, probing regions that are specifically sensitive to the perceptual distance between concepts of the same class. Our results demonstrated that, within each conceptual domain (action and color), concepts judged by individual subjects as perceptually different induced a release from adaptation in several posterior occipital regions (e.g., lingual gyrus, V5, PCoS-V4, middle and superior occipital gyri) in the sighted but not in the blind. These posterior occipital regions are known, in the sighted, to encode visual features and to be sensitive to visual similarity independently of categorical membership 30,33,62,63. Our data corroborate the hypothesis that these regions are also involved in conceptual retrieval 10,34, supporting representations related to visual experience that are not accessible to the blind.
The results of the adaptation analysis are crucial to support the hypothesis that conceptual retrieval consists in a partial replay of our perceptual experience 2,64, suggesting that the visual features of objects and events, encoded in posterior occipital cortices and retrieved during conceptual processing, are not part of the conceptual representations of people who have never seen. Interestingly, our PPI analysis (see also Fig. S2) shows that during conceptual processing the occipital cortices of the blind are actively involved in conceptual retrieval too, paralleling the activation patterns of classic conceptual hubs such as the precuneus or the pMTG. This abnormal occipital activation in blind compared to sighted has been reported before in the context of conceptual processing 19,43,59, and sometimes used as an argument against empiricist accounts: Not only do visual cortices not participate in conceptual retrieval in the sighted but, if anything, they do so only in the blind 15,19. In our study, we demonstrated that this finding is not at odds with empiricist arguments, showing that the occipital cortices of both sighted and blind are involved in conceptual retrieval, but with different functions and probably at different levels of representation: The occipital cortex supports sensorimotor simulations of visual features during conceptual retrieval in the sighted, whereas occipital cortices are re-organized to engage in more general processes related to conceptual retrieval in the blind (although these processes need to be better specified in future studies).
Another important result revealed by the adaptation analysis is that early blind participants showed greater repetition suppression than the sighted in the posterior lateral temporal cortices (PLTC). Many studies before us have found that conceptually similar (e.g., dog - wolf) or semantically associated (e.g., dog - leash) words can lead to repetition suppression in the PLTC 49–51. Although it is still unclear what level of conceptual knowledge is represented in that region 49, there is some agreement that the PLTC stores auditory representations of words that are connected to distributed semantic representations in the brain 49,65–67. In this framework, the PLTC may work at the interface between wordforms and semantic knowledge 65,66, and greater activity in the blind may index a larger reliance on verbal knowledge in this population 68,69 to compensate for the absence of visual information.
From a broader theoretical point of view, the results of this study are in line with a hierarchical model of conceptual representations based on progressive levels of abstraction 8–10,28,70. At the top of the hierarchy, multimodal representations may co-exist with purely symbolic ones organized in a linguistic/propositional code 16. It is possible that a large number of conceptual processes take place involving only the higher levels of representation, supported by conceptual hubs such as the pMTG and precuneus 7,10,39, as well as by category-sensitive regions in anterior VOTC 7,32 or language regions in temporal cortices 71,72. Moreover, multimodal and abstract representations may be ideal to interact with lexical entries in the context of the compositional 70, highly automatic 73 or shallow 74 semantic representations that are continuously required by natural language processing 70. This level of representation can account for the obvious fact that congenitally blind people can think about colors and their perceptual properties 75,76 although they cannot simulate vision.
On the other hand, modality-specific simulation in sensory areas (e.g., visual, auditory, somatosensory), such as that highlighted by our adaptation analysis, may become central in deeper and more deliberate stages of conceptual processing, providing situated, specific and sometimes imagistic representations. In this view, modality-specific simulations may be crucial to support the subjective experience of knowing, providing the phenomenological qualia of conscious thinking 9,77. Indeed, early blind people lack visual qualia, so that color and other visual features are necessarily represented as abstract entities or remapped onto different senses 36. In our study, we were able to show the neural basis of this phenomenological difference between the mental representations of sighted and early blind people.
Materials and methods
Participants
Thirty-six participants took part in this experiment: 18 early blind (EB; 8F) and 18 sighted controls (SC; 8F). Participants were matched pairwise for gender, age, and years of education (Table S1).
All the blind participants lost sight at birth or before 3 years of age and all of them reported not having visual memories (Table S2). All participants were blindfolded during the task. The ethical committee of the Besta Neurological Institute approved this study (protocol fMRI_BP_001) and participants gave their informed consent before participation.
Stimuli
We selected six Italian color words (rosso/red, giallo/yellow, arancio/orange, verde/green, azzurro/blue, viola/purple) and six Italian action words (pugno/punch, graffio/scratch, schiaffo/slap, calcio/kick, salto/jump, corsa/run). Words were all highly familiar nouns and were matched across categories (color, action) by number of letters (Color: mean= 5.83, sd= 0.98; Action: mean= 6, sd= 1.23), frequency (Zipf scale; Color: mean= 4.02, sd= 0.61; Action: mean= 4.18, sd= 0.4), and orthographic neighbors (Coltheart’s N; Color: mean= 14, sd= 9.12; Action: mean= 15.33, sd= 12.42).
Auditory files were created using a voice synthesizer (talk to me) with a female voice, and edited into separate audio files with the same auditory properties (44100 Hz, 32 bit, mono, 78 dB of intensity). The original duration of each audio file (range 356 – 464 ms) was extended or compressed to 400 ms using the PSOLA (Pitch Synchronous Overlap and Add) algorithm and the sound-editing software Praat 78. All the resulting audio files were highly intelligible.
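For illustration, a minimal sketch of this duration normalization using the parselmouth Python interface to Praat; the file names and the 75–600 Hz pitch range are placeholders, not parameters taken from the study:

```python
# Minimal sketch of the 400 ms duration normalization, assuming the
# parselmouth interface to Praat; file names and pitch range are
# placeholders, not values from the study.
import parselmouth
from parselmouth.praat import call

TARGET_DURATION = 0.400  # seconds

sound = parselmouth.Sound("rosso_raw.wav")             # hypothetical input file
factor = TARGET_DURATION / sound.get_total_duration()
# Praat's PSOLA-based lengthening takes a pitch range (Hz) and a
# duration multiplication factor.
stretched = call(sound, "Lengthen (overlap-add)", 75, 600, factor)
stretched.save("rosso_400ms.wav", "WAV")
```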
Procedure
We designed a fast event-related fMRI paradigm during which participants listened to pairs of color and action words. In each trial the two words were played one after the other with a stimulus onset asynchrony (SOA) of 2000 ms.
The inter-trial interval ranged between 4000 and 16000 ms. Participants were asked to judge the similarity of the two colors or the two actions from 1 to 5 (1: very different, 5: very similar). Responses were collected via an ergonomic hand-shaped response box with five keys (Resonance Technology Inc.). All participants used their right hand to provide responses (thumb = very different, pinky = very similar). Participants were told that they had about 4 seconds to provide a response after the onset of the second word of the pair, and they were encouraged to use the whole scale (1 to 5). Furthermore, the instruction was to judge the similarity of colors and actions based on their perceptual properties (avoiding references to emotion, valence, or other non-perceptual characteristics). Blind participants were told to judge color pairs on the basis of their knowledge about the perceptual similarity between colors.
Color and action words were presented in all possible within-category combinations (15 color pairs, 15 action pairs). Each pair was presented twice in each run, in the two possible orders (e.g., red-yellow, yellow-red). Thus, there were 60 trials in each run, and the experiment consisted of 5 runs of 7 minutes each. Stimuli were pseudorandomized using optseq2 to optimize the sequence of presentation of the different conditions. Three different optimized lists of trials were used across runs, and list order was counterbalanced across subjects.
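A sketch of the resulting trial structure for one run is given below. The actual ordering was optimized with optseq2, which we do not reproduce here; the random shuffle and uniform jitter are simplifying assumptions for illustration:

```python
# Simplified sketch of one run's trial list: 15 color + 15 action pairs,
# each presented in both orders (60 trials), with a jittered inter-trial
# interval. The real study optimized trial order with optseq2; here we
# just shuffle randomly for illustration.
import itertools
import random

colors = ["rosso", "giallo", "arancio", "verde", "azzurro", "viola"]
actions = ["pugno", "graffio", "schiaffo", "calcio", "salto", "corsa"]

def both_orders(words, category):
    pairs = list(itertools.combinations(words, 2))   # 15 unordered pairs
    return ([(category, a, b) for a, b in pairs] +
            [(category, b, a) for a, b in pairs])    # 30 ordered trials

trials = both_orders(colors, "color") + both_orders(actions, "action")
random.shuffle(trials)
itis = [random.uniform(4.0, 16.0) for _ in trials]   # jittered ITI (s)

for (category, w1, w2), iti in zip(trials, itis):
    print(f"{category}: {w1} -> {w2}  (next trial after {iti:.1f} s)")
```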
One early blind participant was excluded from the analyses because, due to sleepiness, they responded to fewer than 70% of the trials throughout the experiment. One run of one sighted subject was excluded from the analysis because of a technical error during the acquisition, and two other runs (one in a sighted subject, one in a blind subject) were excluded because the subject responded to fewer than 70% of the trials in that specific run.
Conceptual similarity ratings
In order to perform the adaptation analysis, we divided the trials into similar pairs (e.g., red - orange) and different pairs (e.g., red - blue). We did so based on the participants’ subjective ratings. For each participant we took the average rating for each of the 15 word pairs in the action and color categories. Then we automatically divided the 15 pairs into 5 intervals of nearly equal size, delimited by 4 quantile cut-points. This subdivision was performed using the function quantile in R 79, which divides a probability distribution into contiguous intervals of equal probability (i.e., 20%). The pairs in the first two intervals were the different pairs (low similarity ratings), the pairs in the 3rd interval were the medium pairs, and the pairs in the 4th and 5th intervals were the similar pairs (see Fig 2B). However, in some cases, rating distributions were slightly unbalanced, due to the tendency of some subjects to find more “very different” pairs than “very similar” pairs. In these cases (8 subjects for action ratings [3 EB]; 4 subjects for color ratings [1 EB]), the automatic split into 5 equal intervals was not possible. Thus, we set the boundary between the 2nd and 3rd intervals at that subject’s average rating, and set the number of items in the 3rd interval (not analyzed) to the minimum (1 or 2, depending on the case), in order to balance as much as possible the number of pairs in the Different and Similar groups. This procedure ensured that, in these special cases (as in all the others), the rating values of different pairs were always below the mean, and those of similar pairs were always above the mean. Figs. S3, S4, S5 and S6 in the supplementary information show subject-specific rating distributions.
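A minimal sketch of this splitting rule, substituting numpy's quantile function for R's and using invented ratings for the 15 pairs:

```python
# Sketch of the similarity split under stated assumptions: 15 mean
# pair-ratings are cut into 5 near-equal intervals at the 20/40/60/80%
# quantiles (mirroring R's quantile function); intervals 1-2 are
# "different", 3 is "medium", 4-5 are "similar". Ratings are invented.
import numpy as np

mean_ratings = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0,
                         3.2, 3.5, 3.8, 4.0, 4.2, 4.5, 4.8])  # 15 pairs

cuts = np.quantile(mean_ratings, [0.2, 0.4, 0.6, 0.8])  # 4 cut-points
interval = np.digitize(mean_ratings, cuts)               # interval index 0..4

labels = np.where(interval <= 1, "different",
                  np.where(interval == 2, "medium", "similar"))
for rating, label in zip(mean_ratings, labels):
    print(f"rating {rating:.1f} -> {label}")
```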
MRI data acquisition
Brain images were acquired at the Neurological Institute Carlo Besta in Milano on a 3-Tesla scanner with a 32-channel head coil (Achieva TX; Philips Healthcare, Best, the Netherlands) and gradient echo planar imaging (EPI) sequences.
In the event-related experiment, we acquired 35 slices (voxel size 3 × 3 × 3.5 mm) with no gap. The in-plane matrix size was 64 × 64, field of view (FOV) 220 mm × 220 mm, repetition time (TR) = 2 s, flip angle 90 degrees and echo time (TE) = 30 ms. In all, 1210 whole-brain images were collected during the experimental sequence. The first 4 images of each run were excluded from the analysis to allow for steady-state magnetization. Each participant performed 5 runs, with 242 volumes per run.
Anatomical data were acquired using a T1-weighted 3D-TFE sequence with the following parameters: 1 × 1 × 1 mm voxel size, 240 × 256 matrix size, TR = 2,300 ms, TE = 2.91 ms, TI = 900 ms, FOV = 256 mm, 160 slices.
MRI data analysis
We analyzed the fMRI data using SPM12 (www.fil.ion.ucl.ac.uk/spm/software/spm12/) and Matlab R2014b (The MathWorks, Inc.).
Preprocessing
Preprocessing included slice timing correction of the functional time series 80, realignment of functional time series, coregistration of functional and anatomical data, spatial normalization to an echoplanar imaging template conforming to the Montreal Neurological Institute (MNI) space, and spatial smoothing [Gaussian kernel, 6 mm full-width at half-maximum (FWHM)]. Serial autocorrelation, assuming a first-order autoregressive model, was estimated using the pooled active voxels with a restricted maximum likelihood procedure, and the estimates were used to whiten the data and design matrices.
Data analysis
Following the preprocessing steps, the analysis of fMRI data, based on a mixed-effects model, was conducted in two serial steps accounting, respectively, for fixed and random effects. In all the analyses, the regressors for the conditions of interest consisted of an event-related boxcar function convolved with the canonical hemodynamic response function according to a variable epoch model 81. Movement parameters derived from the realignment of the functional volumes (translations in x, y, and z directions and rotations around the x, y, and z axes), and a constant vector, were also included as covariates of no interest. We used a high-pass filter with a discrete cosine basis function and a cutoff period of 128 s to remove artifactual low-frequency trends.
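To make the drift model concrete, the sketch below builds an SPM-style discrete cosine high-pass basis for one run of this study (242 volumes, TR = 2 s, 128 s cutoff); the exact normalization conventions vary between packages:

```python
# Sketch of the drift model: a discrete cosine basis with a 128 s cutoff,
# in the style of SPM's high-pass filter. Regressing these columns out of
# the data removes low-frequency trends below 1/128 Hz.
import numpy as np

def dct_highpass_basis(n_scans, tr, cutoff=128.0):
    # Number of cosine functions with a period longer than the cutoff.
    order = int(np.floor(2.0 * n_scans * tr / cutoff)) + 1
    n = np.arange(n_scans)
    basis = [np.sqrt(2.0 / n_scans)
             * np.cos(np.pi * (2 * n + 1) * k / (2 * n_scans))
             for k in range(1, order)]          # k = 0 (constant) is excluded
    return np.column_stack(basis)

drift = dct_highpass_basis(n_scans=242, tr=2.0)  # one run of this study
print(drift.shape)  # (242, number_of_drift_regressors)
```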
Univariate analysis
For each subject, changes in regional brain responses were estimated through a general linear model including 2 regressors corresponding to the two categories, Action and Color. The onset of each event was set at the beginning of the first word of the pair, and the offset was determined by the subject’s response, thus including the reaction time 81. Linear contrasts tested for action-specific [Action > Color] and color-specific [Color > Action] BOLD activity.
These linear contrasts generated statistical parametric maps [SPM(T)]. The resulting contrast images were then further spatially smoothed (Gaussian kernel, 5 mm FWHM) and entered in a second-level analysis, corresponding to a random-effects model accounting for inter-subject variance. One-sample t-tests were run on each group separately. Two-sample t-tests were then performed to compare these effects between groups (Blind vs Sighted), and conjunction analyses were performed to assess whether the two groups activated similar networks for the two contrasts of interest.
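As an illustration of the variable-epoch model used at the first level, the sketch below builds one condition regressor as a boxcar lasting from first-word onset to the response, convolved with a canonical double-gamma HRF; onsets and reaction times are invented, and the HRF parameters are commonly used defaults rather than values taken from the study:

```python
# Sketch of one variable-epoch regressor: a boxcar from the onset of the
# first word to the subject's response, convolved with a canonical
# double-gamma HRF and sampled once per TR. Event timings are invented.
import numpy as np
from scipy.stats import gamma

TR, N_SCANS, DT = 2.0, 242, 0.1   # run parameters; DT is a fine time grid (s)

def canonical_hrf(dt):
    t = np.arange(0, 32, dt)
    # Standard double-gamma shape (peak ~5 s, undershoot ~15 s).
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def variable_epoch_regressor(onsets, durations):
    grid = np.zeros(round(N_SCANS * TR / DT))
    for on, dur in zip(onsets, durations):
        grid[round(on / DT):round((on + dur) / DT)] = 1.0  # onset -> response
    conv = np.convolve(grid, canonical_hrf(DT))[:len(grid)]
    return conv[::round(TR / DT)]                          # sample at each TR

onsets = [10.0, 24.0, 41.5]   # first-word onsets (s), invented
rts = [3.1, 2.7, 3.4]         # onset-to-response durations (s), invented
action_regressor = variable_epoch_regressor(onsets, rts)
print(action_regressor.shape)  # (242,)
```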
Connectivity analysis
Psychophysiological interaction (PPI) analyses were computed to identify brain regions showing a significant change in functional connectivity with seed regions (the right precuneus, the left pMTG and the rIPS) that showed a significant activation (p<.001, uncorrected) in the [(EB Conj. SC) X (Color > Action)] contrast, the [(EB Conj. SC) X (Action > Color)] contrast, and the [(SC > EB) X (Color > Action)] contrast, respectively. In each individual, time series of activity (principal eigenvariate) were extracted from an 8 mm sphere centered on the local maxima nearest to the peaks identified in the second-level analysis (note that centering the sphere on the peak itself does not change the ROI analysis results; see Supplementary Information). New linear models were generated at the individual level, using three regressors. One regressor represented the psychological condition of interest (action or color trial). The second regressor was the physiological activity extracted from the reference area. The third regressor represented the interaction of interest between the first (psychological) and the second (physiological) regressor. The design matrix also included movement parameters and a constant vector as regressors of no interest. A significant PPI indicated a change in the regression coefficients between any reported brain area and the seed region, related to the experimental conditions (Color > Action or Action > Color). Next, the individual summary-statistic images obtained at the first-level (fixed-effects) analysis were spatially smoothed (5 mm FWHM Gaussian kernel) and entered in a second-level (random-effects) analysis using a two-sample t-test contrasting the two groups.
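The logic of the three PPI regressors can be sketched as follows. Note that SPM's actual implementation forms the interaction at the neural level via deconvolution of the seed time series, which this simplified illustration (with invented time series) omits:

```python
# Simplified sketch of the three PPI regressors. Real PPI (e.g., SPM's
# implementation) deconvolves the seed time series before forming the
# interaction; here we illustrate the logic with a direct product,
# using invented time series.
import numpy as np

n_scans = 242
rng = np.random.default_rng(0)

# Psychological regressor: condition code (+1 action, -1 color),
# mean-centered by construction; block structure is invented.
psych = np.tile([1.0] * 10 + [-1.0] * 10, n_scans // 20 + 1)[:n_scans]

# Physiological regressor: seed time course, e.g., the first
# eigenvariate of an 8 mm sphere in the lpMTG (random stand-in here).
physio = rng.standard_normal(n_scans)

# PPI term: condition-dependent coupling with the seed.
ppi = psych * physio

design = np.column_stack([psych, physio, ppi, np.ones(n_scans)])
print(design.shape)  # (242, 4): psych, physio, PPI, constant
```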
Adaptation analysis
For each subject, the general linear model included 6 regressors corresponding to the 3 levels of similarity (different, medium, similar) in each category (color, action). Color and action pairs in the medium condition were modeled as regressors of no interest.
At the first level of analysis, linear contrasts tested for repetition suppression [Different > Similar], collapsing across categories (Action, Color). The same contrasts were then repeated within each category [Color Different > Color Similar; Action Different > Action Similar]. Finally, we tested for Similarity by Category interactions, assessing whether adaptation was stronger in one category compared to the other (e.g., [Color Different > Color Similar] > [Action Different > Action Similar]).
These linear contrasts generated statistical parametric maps [SPM(T)]. The resulting contrast images were then further spatially smoothed (Gaussian kernel, 5 mm FWHM) and entered in a second-level analysis, corresponding to a random-effects model accounting for inter-subject variance. One-sample t-tests were run on each group separately. Two-sample t-tests were then performed to compare these effects between groups (Blind vs Sighted).
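For concreteness, the first-level contrast weights over the six regressors can be written as follows; the regressor ordering is an assumption for illustration:

```python
# Contrast weights over the six regressors, assuming the ordering
# [color-diff, color-med, color-sim, action-diff, action-med, action-sim].
import numpy as np

regressors = ["col_diff", "col_med", "col_sim",
              "act_diff", "act_med", "act_sim"]

adaptation_all = np.array([1, 0, -1, 1, 0, -1])    # Different > Similar, both categories
adaptation_color = np.array([1, 0, -1, 0, 0, 0])   # Color Different > Color Similar
adaptation_action = np.array([0, 0, 0, 1, 0, -1])  # Action Different > Action Similar

# Similarity by Category interaction: is adaptation stronger for color?
similarity_by_category = adaptation_color - adaptation_action
print(dict(zip(regressors, similarity_by_category)))  # [1, 0, -1, -1, 0, 1]
```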
Statistical inferences
At the whole-brain level, statistical inference was made at a corrected cluster level of P < 0.05 FWE (with a standard voxel-level threshold of P < 0.001 uncorrected) and a minimum cluster size of 50 voxels. ROI analyses based on Small Volume Correction were thresholded at p < 0.05 FWE at the voxel level.
ROI definition and analysis
The occipital ROIs for the PPI analyses were defined as follows. Two peak coordinates were taken from previous studies 42,43 showing the involvement of EB occipital areas in high-level functions such as language (left MOG [-36, −90, −1]) and mathematics (right MOG [33, −82, 9]). These areas also showed increased long-range connectivity (in the early blind) with frontal and parietal areas during rest 42–45.
The V4 and V5 ROIs were drawn from the literature, considering both perceptual localizers and evidence from semantic/conceptual tasks. We selected 3 peak coordinates for area V5. The first [-47, −78, −2] came from a highly-cited study contrasting the perception of visual motion vs static images 53. The second [-44, −74, 2] came from a study 82 showing V5 sensitivity to motion sentences (e.g., “The wild horse crossed the barren field”). The third came from a search on the online meta-analysis tool Neurosynth (http://neurosynth.org/) for the topic “action”; in Neurosynth, the area in the occipital cortex with the highest action-related activation was indeed V5 (peak coordinates: −50, −72, 2). To avoid ROI proliferation, we averaged these 3 peak coordinates in order to obtain a single peak (average peak: −47, −75, 1).
As for V4, we selected the color-sensitive occipital ROI considering perceptual localizers, as well as evidence of color-specific activity from semantic/conceptual tasks. Fernandino et al. 10 reported a color-sensitive area in the left posterior collateral sulcus (ColS; at the intersection between the lingual and the fusiform gyrus; MNI peak coordinates: −16, −71, −12) associated with color-related words. This peak is close to the posterior-V4 localization by Beauchamp and colleagues (peak coordinates: −22, −82, −16) in an MRI version of the Farnsworth–Munsell 100-Hue Test 52. A search in Neurosynth with the keyword “color” also highlighted a left posterior color-sensitive region along the ColS, with peak coordinates [-24, −90, −10]. We averaged these 3 peaks to find the center of our region of interest (average peak: −21, −81, −13).
The posterior lateral temporal cortex (PLTC) ROI was taken from 3 studies showing semantic repetition suppression in that area. Bedny and colleagues 49 observed increased neural adaptation in the PLTC (peak coordinates: 57, −36, 21) for repeated words (fan - fan) when the words were presented in a similar context (summer - fan; ceiling - fan), compared to when different contexts triggered different meanings (e.g., admirer - fan; ceiling - fan). This result conceptually replicated previous studies 50,51 showing semantic adaptation in the bilateral PLTC for related (e.g., dog - cat) vs unrelated (e.g., dog - apple) word pairs (peak coordinates: −42, −27, 9 and −51, −22, 8). These 3 peaks were averaged to find the center of our region of interest in both hemispheres (average peak: ±50, −28, 13).
All ROI analyses were performed using Small Volume Correction with spheres of 10 mm radius. Within each ROI, results were considered significant at a threshold of p < 0.05, FWE-corrected. Here, and throughout the paper, brain coordinates are reported in MNI space.
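A sketch of how such a spherical ROI can be constructed: the three V5 peaks quoted above are averaged and all voxels within 10 mm of the result are marked. The 3 mm grid affine is a generic assumption, not the study's actual image geometry:

```python
# Sketch of a 10 mm spherical ROI around an averaged MNI peak, the kind
# of volume used for Small Volume Correction. Affine and grid shape are
# generic 3 mm MNI-space assumptions, not the study's actual geometry.
import numpy as np

peaks = np.array([[-47.0, -78.0, -2.0],
                  [-44.0, -74.0, 2.0],
                  [-50.0, -72.0, 2.0]])   # the three V5 peaks from the text
center = peaks.mean(axis=0)               # approx. [-47, -75, 1], as reported

affine = np.array([[3.0, 0.0, 0.0, -78.0],   # hypothetical 3 mm MNI grid
                   [0.0, 3.0, 0.0, -112.0],
                   [0.0, 0.0, 3.0, -70.0],
                   [0.0, 0.0, 0.0, 1.0]])
shape = (53, 63, 52)

# World (mm) coordinates of every voxel, then a 10 mm sphere around the center.
ijk = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
xyz = ijk @ affine[:3, :3].T + affine[:3, 3]
mask = np.linalg.norm(xyz - center, axis=-1) <= 10.0

print(mask.sum(), "voxels inside the ROI")
```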
Author contributions
R.B. and O.C. designed the research; R.B., S.F., A.N. and V.C. performed the research; R.B. analyzed the data in interaction with O.C.; R.B. and O.C. drafted the paper; all authors revised and edited the draft, and agreed on the final version of the manuscript.
Acknowledgement
This work was supported by a European Research Council starting grant (MADVIS grant #337573) attributed to O.C. O.C. is a research associate at the Fond National de Recherche Scientifique de Belgique (FRS-FNRS). We wish to extend our gratitude to Michela Picchetti, Mattia Verri and Alberto Redolfi for their technical support during fMRI acquisition. We are also extremely thankful to our blind participants, the Unione Ciechi e Ipovedenti in Trento, Milano, Savona and Trieste, and the Blind Institute of Milano.
Footnotes
1 It is worth noting that previous studies have found activations in the anterior fusiform gyrus (anterior V4) for the univariate contrast color > action 12,13,83. However, all these experiments involved judgments about the color of objects, and words were presented visually. On the other hand, studies that used simple color words (e.g., red, blue, green) and auditory stimuli failed to find color-related activity in the anterior fusiform 84–86, in line with our results. Indeed, color specificity in anterior V4 may be specific to object color or may refer, more generally, to multimodal surface properties of objects 10.