Brain and Language

Volume 121, Issue 3, June 2012, Pages 273-288

Review
The cortical organization of lexical knowledge: A dual lexicon model of spoken language processing

https://doi.org/10.1016/j.bandl.2012.03.005

Abstract

Current accounts of spoken language assume the existence of a lexicon where wordforms are stored and interact during spoken language perception, understanding and production. Despite the theoretical importance of the wordform lexicon, the exact localization and function of the lexicon in the broader context of language use are not well understood. This review draws on evidence from aphasia, functional imaging, neuroanatomy, laboratory phonology and behavioral results to argue for the existence of parallel lexica that facilitate different processes in the dorsal and ventral speech pathways. The dorsal lexicon, localized in the inferior parietal region including the supramarginal gyrus, serves as an interface between phonetic and articulatory representations. The ventral lexicon, localized in the posterior superior temporal sulcus and middle temporal gyrus, serves as an interface between phonetic and semantic representations. In addition to their interface roles, the two lexica contribute to the robustness of speech processing.

Highlights

► Spoken language processing relies on parallel lexica in the dorsal and ventral speech streams.
► The pMTG mediates the mapping between sound and meaning in the ventral stream lexicon.
► The SMG mediates the mapping between sound and articulation in the dorsal stream lexicon.
► Both lexica may play a role in speech perception and production.
► Uniquely lexical properties influence behavioral/neural measures of both streams.

Introduction

This paper presents a new model of how lexical knowledge is represented and utilized and where it is stored in the human brain. Building on the dual pathway model of speech processing proposed by Hickok and Poeppel (2000, 2004, 2007), its central claim is that representations of the forms of spoken words are stored in two parallel lexica. One lexicon, localized in the posterior temporal lobe and forming part of the ventral speech stream, mediates the mapping from sound to meaning. A second lexicon, localized in the inferior parietal lobe and forming part of the dorsal speech stream, mediates the mapping between sound and articulation.

Lexical knowledge is an essential component of virtually every aspect of language processing. Language learners leverage the words they know to infer the meanings of new words based on the assumption of mutual exclusivity (Merriman & Bowman, 1989). Listeners use stored lexical knowledge to inform phonetic categorization (Ganong, 1980) and to guide processes including lexical segmentation (Gow & Gordon, 1995), perceptual learning (Norris, McQueen, & Cutler, 2003) and the acquisition of novel wordforms (Gaskell & Dumay, 2003). Lexically indexed syntactic information also guides the assembly and parsing of syntactic structures (Bresnan, 2001; Lewis et al., 2006). By some estimates, a typical literate adult English speaker may command a vocabulary of 50,000 to 100,000 words (Miller, 1991) in order to achieve these goals. Given this background, it is important to understand where and how words are represented in the brain.

Studies of this question date to the first scientific papers on the neural basis of language. In 1874 Carl Wernicke described a link between damage to the left posterior superior temporal gyrus (pSTG) and impaired auditory speech comprehension. He hypothesized that the root of the impairment was damage to a putative permanent store of word knowledge that he termed the Wortschatz or “treasury of words”. In his model, this treasury consisted of sensory representations of words that interfaced with both a frontal articulatory center and a widely distributed set of conceptual representations in motor, association and sensory cortices. In this model, Wernicke was careful to distinguish between permanent “memory images” of the sounds of words, and the effects of “sensory stimulation”, a notion akin to activation associated with sensory processing or short-term buffers (Wernicke, 1874/1969). The broad dual pathway organization of Wernicke’s model has been supported by modern research (Hickok & Poeppel, 2000, 2004, 2007; Scott, 2005; Scott & Wise, 2004), but his interpretation of the left STG as the location of a permanent store of auditory representations of words is open to debate.

The strongest support for the classical interpretation of the pSTG as a permanent store of lexical representations comes from BOLD imaging studies that show that activation of the left pSTG and adjacent superior temporal sulcus (STS) is sensitive to lexical properties including word frequency and neighborhood size (Graves, Grabowski, Mehta, & Gordon, 2007; Okada & Hickok, 2006). Neighborhood size is a measure of the number of words that closely resemble the phonological form of a given word. This result is tempered in part by evidence that a number of regions outside of the pSTG/STS are also sensitive to these factors (cf. Prabhakaran, Blumstein, Myers, Hutchinson, & Britton, 2006; Goldrick & Rapp, 2006; Graves et al., 2007) and directly modulate pSTG/STS activation during speech perception (Gow & Segawa, 2009; Gow et al., 2008). This raises the possibility that sensitivity to lexical properties is referred from other areas, and that the STG/STS acts as a sensory buffer where multiple information types converge to refine and perhaps normalize transient representations of wordform.
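
To make the metric concrete, the following Python sketch (my own illustration, not drawn from any of the studies cited above; the toy lexicon and function names are assumptions) implements the usual one-phoneme counting rule, under which a word's neighbors are the known words reachable by a single phoneme substitution, deletion or addition:

```python
# Illustrative sketch of phonological neighborhood size.
# Words are written as tuples of phoneme symbols; the lexicon is a toy example.

def edit_distance_is_one(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, deletion, or addition."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        # substitution: exactly one mismatched position
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):          # make a the shorter sequence
        a, b = b, a
    # deletion/addition: removing one phoneme from b must yield a
    return any(a == b[:i] + b[i + 1:] for i in range(len(b)))

def neighborhood_size(target, lexicon):
    """Count the lexicon entries that are one-phoneme neighbors of target."""
    return sum(edit_distance_is_one(target, w) for w in lexicon if w != target)

# toy phonemic lexicon (illustrative)
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ah", "t"),
           ("k", "ae", "b"), ("s", "k", "ae", "t"), ("k", "ae", "t", "s")]
print(neighborhood_size(("k", "ae", "t"), lexicon))  # -> 5 (bat, cut, cab, scat, cats)
```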

This view of the STG/STS is consistent with both neuropsychological and neuroimaging evidence. In the 1970s and 1980s aphasiologists noted that damage to the left STG does not lead to impaired word comprehension (Basso et al., 1977; Blumstein et al., 1977; Damasio & Damasio, 1980; Miceli et al., 1980). A review of BOLD imaging studies by Hickok and Poeppel (2007) showed consistent bilateral activity in the posterior STG in speech–resting state contrasts, and in the adjacent STS when participants listened to speech as compared to tones or less speech-like complex auditory stimuli. They interpreted this pattern as evidence that the bilateral superior temporal cortex is involved in high-level spectrotemporal auditory analyses, including the acoustic–phonetic processing of speech. This spectrotemporal analysis could in turn be informed by top-down influences on the STG from permanent wordform representations stored in other parts of the brain, producing evolving transient representations of phonological form that are consistent with higher level linguistic constraints and representations. This hypothesis is discussed in Section 7.

At the same time that aphasiologists and neurolinguists were recharacterizing the function of the STG, psycholinguists were developing a more nuanced understanding of lexical processing. A distinction emerged between spoken word recognition, the mapping of sound onto stored phonological representations of words, and lexical access, the activation of representations of word meaning and syntactic properties. This distinction was reinforced by studies of patients who showed a double dissociation between the ability to recognize words and the ability to understand them. Some patients had preserved lexical decision but impaired word comprehension (Franklin et al., 1994, 1996; Hall & Riddoch, 1997), while others showed relatively preserved word comprehension with deficient lexical decision or phonological processing (Blumstein et al., 1977; Caplan & Utman, 1994). At a higher level, some patients showed more circumscribed deficits in word comprehension coupled with specific deficits in the naming of items in certain categories including colors and body parts (Damasio et al., 1979; Dennis, 1976). This fractionation of lexical knowledge was accompanied by a widening list of brain structures associated with lexical processing. Disturbances in various aspects of spoken word recognition, comprehension and production were associated with damage to regions in the temporal, parietal and frontal lobes (cf. Coltheart, 2004; Damasio & Damasio, 1980; Gainotti et al., 1986; Patterson et al., 2007).

The advent of functional neuroimaging techniques introduced invaluable new data that underscore the conceptual challenges of localizing wordform representations. Three types of studies have dominated this work: (1) word-pseudoword contrasts, (2) repetition suppression/enhancement designs, and (3) designs employing parametric manipulation of lexical properties. Many studies have contrasted activation associated with listening to words versus pseudowords (Binder et al., 2000; Kotz et al., 2002; Newman & Twieg, 2001; Majerus et al., 2005; Bellgowan et al., 2003; Prabhakaran et al., 2006; Rissman et al., 2003; Vigneau et al., 2005; Xiao et al., 2005; Orfanidou et al., 2006; Raettig & Kotz, 2008; Sabri et al., 2008; Davis et al., 2009). These studies differ by task and in the specific wordform properties of the word and pseudoword stimuli. Nevertheless, reviews and meta-analyses have identified several systematic trends in these data (Davis & Gaskell, 2009; Raettig & Kotz, 2008). A meta-analysis of 11 studies by Davis and Gaskell (2009) found 68 peak voxels that show more activation for words than pseudowords at a corrected level of significance. These included left hemisphere voxels in the anterior and posterior middle and superior temporal gyri, the inferior temporal and fusiform gyri, the inferior and superior parietal lobules, the supramarginal gyrus, and the inferior and middle frontal gyri. In the right hemisphere, words produced more activation than nonwords in the middle and superior temporal gyri, supramarginal gyrus, and precentral gyrus. The same study also showed significantly more activation for pseudowords than words in 29 regions, including voxels in the left mid-posterior and mid-anterior superior temporal gyrus, the left posterior middle temporal gyrus, portions of the left inferior frontal gyrus, and the right superior and middle temporal gyri.

While these studies would appear to bear on the localization of the lexicon, it is important to note that the lexicon is rarely invoked in this work. This subtraction is generally associated with the broader identification of brain regions supporting “lexico-semantic processing” (cf. Raettig & Kotz, 2008) or “word recognition” (cf. Davis & Gaskell, 2009). There are several reasons to suspect that a narrower reading of these subtractions that directly and uniquely ties them to wordform localization is unviable. Recognizable words trigger a cascade of representations and processes related to their semantic and syntactic properties that pseudowords either do not trigger, or trigger to a different extent.1 As a result, many of the regions that are activated in word-pseudoword subtractions may be associated with the representation of information that is associated with wordforms, and not just wordforms themselves.

Behavioral and neuroimaging results provide converging evidence that suggests another limitation of the word-pseudoword subtraction as a tool for localizing wordform representations. One can imagine a system in which words activate stored representations of form but nonwords do not. Given such a system, a word-pseudoword subtraction could be used to localize the lexicon. However, evidence from behavioral and neuroimaging studies suggests that pseudowords are represented using the same resources that are used to represent words. A number of behavioral results in tasks including lexical decision, naming, and repetition show that the processing of nonwords is influenced by the degree to which they resemble real words (cf. Frisch et al., 2000; Gathercole & Martin, 1996; Gathercole et al., 1991; Luce & Large, 2001; Saito et al., 2003; Vitevitch & Luce, 1998, 1999). The overlap in operations is masked by word-pseudoword subtractions, but is apparent in BOLD results that employ resting state subtractions. Binder et al. (2000) and Xiao et al. (2005) showed almost identical patterns of activation in word-resting state and pseudoword-resting state subtractions. The only differences they reported were a tendency for more bilateral activation for words in the ventral precentral sulcus and pars opercularis in the Binder et al. study, and less activation in the parahippocampal region in the Xiao et al. study. Moreover, several studies have shown that pseudoword BOLD activation is influenced by the degree to which pseudowords resemble known words, with word-like pseudowords producing activation patterns that were more similar to those produced by familiar words than those produced by less wordlike tokens (Majerus et al., 2005; Raettig & Kotz, 2008). Evidence for a shared neural substrate for the representation of words and pseudowords has implications for the nature of wordform representations (discussed in Section 2). Moreover, it suggests that differential activation produced by listening to words and pseudowords relates to form properties of pseudowords that are not generally controlled for in this research.

Repetition suppression and enhancement designs offer a more targeted tool for localizing wordform representations. In word recognition tasks, repeated presentation of the same items leads to a reduction in response latency and an increase in accuracy. This type of repetition priming is mirrored at a physiological level by repetition suppression and enhancement, in which repetition of a stimulus leads to changes in localized BOLD responses (see review by Henson, 2003). Several studies using passive listening to meaningful words have demonstrated repetition suppression effects in the left mid-anterior STS (Cohen et al., 2004; Dehaene-Lambertz et al., 2006). This finding was replicated by Buchsbaum and D’Esposito (2009), who used an explicit “new/old” recognition judgment. They also found repetition enhancement or reactivation at the boundary of the bilateral pSTG, anterior insula and inferior parietal cortex including the SMG.

The fact that words were used in these studies does not necessarily indicate that repetition effects reflect lexical activation. Activation changes could reflect representation or processing at any level (e.g. auditory, acoustic–phonetic, phonemic, lexical). In order to directly tie these effects to lexical representation it is necessary to control for the contribution of non-lexical repetition. Orfanidou et al. (2006) addressed this issue by using different speakers for the first and second presentations of words to minimize the influence of auditory representation, and by contrasting repetition effects associated with phonotactically matched word and pseudoword stimuli to target specifically lexical properties. They found no evidence of an interaction between lexicality and repetition in any voxel in whole brain comparisons. This result is again consistent with the notion that word and pseudoword representation share a common neural substrate. Analyses collapsing across lexicality showed significant repetition suppression in the supplementary motor area (SMA) and bilateral inferior frontal and posterior inferior temporal regions, as well as repetition enhancement in bilateral parietal, orbitofrontal and dorsal frontal regions, the right posterior inferior temporal gyrus, and a region including the right precuneus and adjacent parietal lobe. The lack of anterior STS suppression in these results may reflect the diminished role of auditory effects due to the speaker manipulation. However, the lack of an orthogonal manipulation of phoneme, syllable or diphone repetition makes it unclear whether these effects are directly attributable to lexical representation.

The other primary BOLD imaging strategy for localizing lexical representation involves contrasts that rely on parametric manipulation of specifically lexical properties including word frequency, phonological neighborhood size and lexical competitor environment. This strategy (which is discussed again in Section 3) is less widely used than word-pseudoword contrasts or repetition suppression/enhancement techniques, but has been explored by several groups. In an auditory lexical decision task, Prabhakaran et al. (2006) found differential activation based on word frequency in left pMTG extending into STG and left aMTG. In contrast, Graves et al. (2007) found frequency sensitivity in left hemisphere SMG, pSTG, and posterior occipitotemporal cortex and bilateral inferior frontal gyrus in a picture naming task. These results differ, but do show some overlapping STG activation and adjacent activations in the left posterior temporal lobe associated with word frequency. Differences in frequency sensitivity in the two studies in other areas may be related to differences in the task demands imposed by lexical decision versus overt naming.

Manipulations of neighborhood size have also produced different patterns of activation in different studies. Okada and Hickok (2006) found sensitivity to neighborhood size limited to bilateral pSTS in a passive listening task, while Prabhakaran et al. (2006) found neighborhood effects in the left SMG, caudate and parahippocampal region in their auditory lexical decision task. In this case, the differences may be related to the differing attentional demands of passive listening versus lexical decision. In a study employing a selective attention manipulation during bimodal language processing, Sabri et al. (2008) found that while superior temporal regions were activated in all speech conditions, differential activation associated with lexical manipulations (word-pseudoword subtraction) was only found when subjects attended to speech. This suggests that tasks such as passive listening that require only shallow processing may fail to produce robust activation outside of superior temporal cortex.

To summarize, the complex and often contradictory results seen in the BOLD imaging literature do not provide a simple resolution to the localization problem, but they do delineate a number of issues that any satisfying resolution must address. Claims about the localization of the lexicon must be framed in relation to a general understanding of the nature of lexical representation that specifically addresses the relationship between the representation of words, pseudowords and sublexical representations, and the causes of task effects.

Recent behavioral results and advances in the characterization of neural processing streams associated with spoken language processing suggest that some task effects may be attributable to a fundamental distinction between semantic and articulatory phonological processes. In one line of experimentation, researchers have found that listeners show different patterns of behavioral effects when presented with the same set of spoken word stimuli in similar tasks that tap phonological versus semantic aspects of word knowledge. Gaskell and Marslen-Wilson (2002) showed that gated primes (e.g. captain presented as /kæpt/ or /kæptɪ/) produce significant phonological priming for complete words (CAPTAIN), but no priming and no effect of degree of overlap for strong semantic associates (e.g. COMMANDER). Norris, Cutler, McQueen, and Butterfield (2006) found several similar differences between phonological and semantic cross-modal priming. They found both associative (date – TIME) and identity (date – DATE) priming when spoken primes were presented in isolation, but only identity priming when they were presented in sentences. In instances in which a short wordform is embedded in a longer wordform (e.g. date in sedate), no associative priming was found for embedded words (sedate – TIME), but negative form priming (sedate – DATE) was found in sentential contexts. Together, these results demonstrate the dissociability of semantic and phonological modes of lexical processing in the perception of spoken words.

Gaskell and Marslen-Wilson (1997) explored the idea that semantic and phonological aspects of spoken word processing may be independent of each other in their distributed cohort model. Unlike earlier models (cf. McClelland & Elman, 1986) that assumed that lexical access is the result of an ordered mapping from acoustic–phonetic representation to phonological and then semantic representation, their model employed direct simultaneous parallel mapping processes between low-level sensory representations and distributed semantic and phonological representations.2 In their work, the decision to represent lexical semantics and phonology as separate outputs was motivated in part by computational considerations. A parallel architecture offers potentially faster access to semantic representations. This general organization also allows for the development of intermediate representations that are optimally suited for the mapping between a common input representation and different output representations.
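
The published distributed cohort simulations used recurrent networks trained on featural speech input; the fragment below is only a minimal sketch of the architectural point made here, assuming arbitrary layer sizes and untrained random weights: a single input representation drives distributed phonological and semantic output codes through one shared intermediate layer, in parallel rather than in series.

```python
# Minimal numpy sketch of parallel-output mapping (illustrative assumptions:
# layer sizes, random weights, and all names are mine, not the published model).
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_HIDDEN = 40, 60   # acoustic-phonetic features; shared hidden units
N_PHON, N_SEM = 30, 50       # distributed phonological / semantic codes

W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_INPUT))    # shared mapping
W_phon = rng.normal(scale=0.1, size=(N_PHON, N_HIDDEN))   # phonological pathway
W_sem = rng.normal(scale=0.1, size=(N_SEM, N_HIDDEN))     # semantic pathway

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def process(acoustic_input):
    """Map one input to phonological and semantic codes in a single pass."""
    hidden = sigmoid(W_in @ acoustic_input)   # common intermediate code
    return sigmoid(W_phon @ hidden), sigmoid(W_sem @ hidden)

phonology, semantics = process(rng.normal(size=N_INPUT))
print(phonology.shape, semantics.shape)  # both computed from the same input
```

The shared hidden layer is the locus of the "intermediate representations" mentioned above: because it must serve both output mappings at once, it is shaped by the demands of each.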

The parallel mapping between low-level phonetic representations of speech and semantic versus phonological representations proposed by Gaskell and Marslen-Wilson is similar in form to modern dual-pathway models of spoken language processing, which draw on the pathology, functional imaging and psychological literatures and postulate separate routes from auditory processing to semantics and speech production (Hickok & Poeppel, 2000, 2004, 2007; Rauschecker & Scott, 2009; Scott, 2005; Scott & Wise, 2004; Warren et al., 2005; Wise, 2003). In these models auditory input representations are initially processed in primary auditory cortex, with higher-level auditory and acoustic–phonetic processing taking place in adjacent superior temporal structures. As in Gaskell and Marslen-Wilson’s model, subsequent mappings are carried out in simultaneous parallel processing streams. In the neural models these include a dorsal pathway that provides a mapping between sound and articulation, and a ventral pathway that maps from sound to meaning.

In the model developed by Scott and colleagues (Rauschecker & Scott, 2009; Scott, 2005; Scott & Wise, 2004), the left ventral pathway links primary auditory cortex to the lateral STG and then the anterior STS (aSTS). No ventral lexicon is proposed in these models. In the Hickok and Poeppel models (2000, 2004, 2007), the mapping between sound and meaning is mediated by a lexical interface located in the posterior middle temporal gyrus (pMTG) and adjacent cortices. This interface is the most explicit description of a lexicon in any of the dual stream models.

Parallels between the distributed model’s phonological output and the articulatory dorsal processing stream in dual stream models are less clear. One critical question is whether articulatory and phonological representations are the same thing. While phonological representation is historically rooted in articulatory description (Chomsky & Halle, 1968), current theories of featural representation include both explicitly articulatory (cf. Browman & Goldstein, 1992) and purely abstract systems (cf. Hale & Reiss, 2008). The lexical representations used in Gaskell and Marslen-Wilson’s model do not make a clear commitment to articulatory or non-articulatory representation.

In summary, despite widespread evidence that words play a central role in language processing, over a century of research has produced no clear consensus on where or how words are represented in the brain. This may be attributed to a number of factors including the methodological challenges inherent in discriminating between lexical activation, processes that follow on lexical activation, and the application of lexical processes to pseudoword stimuli. During the same period, evidence from dissociations in unimpaired and aphasic behavioral processing measures has pointed towards a potential dissociation between semantic and phonological or articulatory aspects of lexical processing that roughly parallels distinctions made in recent dual stream models of spoken language processing in the human brain. In the sections that follow I will develop a framework for understanding the organization and function of lexical representations and review evidence from a variety of disciplines that suggests the existence of parallel lexica in the ventral and dorsal language processing streams.

Section snippets

The computational significance of words

The lexicon has been hard to localize in part because of a lack of agreement about its function. Researchers have adopted the term “lexicon” to describe the specific role that lexical knowledge plays in a variety of aspects of processing. As a result, the term has different meanings to different research communities. Syntacticians describe it as a store of grammatical knowledge (Bresnan, 2001; Jackendoff, 2002), morphologists see it as an interface between sound and meaning (Ullman et al., 2005), …

Distributed versus local representation

This section will examine the question of how lexical representation might be instantiated and identified in behavioral or neural data. In many models of lexical access words are assumed to have local representation (cf. Marslen-Wilson, 1987; McClelland & Elman, 1986; Morton, 1969; Norris, 1994) in which each word is represented by a single discrete node or entry. This type of representation is transparently and unequivocally lexical. In contrast, many connectionist models of spoken and …
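
As an illustration of the distinction (my own sketch, with a hypothetical three-word lexicon and made-up feature values, not taken from the models cited above): localist codes for cat and cap are orthogonal, whereas distributed codes for the same words overlap in proportion to their formal similarity.

```python
# Localist vs distributed coding of the same toy lexicon (illustrative).
import numpy as np

words = ["cat", "cap", "dog"]

# Localist: one dedicated node per word; a word's code is its own unit.
localist = np.eye(len(words))              # each row = one word's code

# Distributed: each word is a pattern over shared feature units, so
# form similarity (cat ~ cap) is graded overlap rather than all-or-none.
distributed = np.array([
    [1.0, 0.9, 0.1, 0.8],   # cat
    [1.0, 0.9, 0.1, 0.2],   # cap
    [0.1, 0.2, 1.0, 0.7],   # dog
])

def similarity(a, b):
    """Cosine similarity between two codes."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity(localist[0], localist[1]))        # 0.0  (cat vs cap)
print(similarity(distributed[0], distributed[1]))  # ~0.93 (cat vs cap)
```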

Overview of the dual lexicon model

The dual lexicon model works within the broader context of dual pathway models of spoken language processing. The anatomical organization of the left hemisphere components of this bilateral model is shown in Fig. 1. In the ventral pathway, a lexicon located in pMTG and adjacent pITS mediates the mapping between words and meaning. This area is not a store of semantic knowledge, but instead houses morphologically organized representations of word forms. These representations link the acoustic phonetic …

The ventral lexicon

Hickok and Poeppel (2004, 2007) identify a region comprising pMTG and adjacent pITS that projects directly to a widely distributed semantic network and acts as a lexical interface between sound and meaning in the ventral pathway. This is clearly a lexicon within the current framework. In contrast, Scott and Wise’s dual stream model (2004) focuses on prelexical processes, and does not identify a comparable structure. In their model, the “what” pathway links low level auditory …

The dorsal lexicon

A broad convergence of evidence suggests that the supramarginal gyrus (SMG) serves as a dorsal stream lexicon, playing a role in speech production and perception as well as articulatory working memory rehearsal. The notion that speech production and perception share a common lexicon is a matter of some debate, with prominent psycholinguistic models arguing for separate input and output lexica (Dell et al., 1997; Levelt et al., 1999), and models motivated by neuropsychological and functional …

The function of the STG

The dual lexicon model is an attempt to consolidate new data with our evolving understanding of the role of lexical representation in language processing. This section briefly discusses the role of the posterior superior temporal cortex, the original Wortschatz, in the context of the dual lexicon framework.

Wernicke’s model focuses on the role of left posterior superior temporal cortex. More recent work supports the importance of this region in spoken language processing, but suggests that pSTG …

Summary

The dual lexicon model provides a framework for integrating a broad and diverse set of empirical results and computational considerations. It unites observations from aphasia, behavioral psycholinguistic paradigms, laboratory and theoretical phonology, BOLD activation in normals, electrophysiology, functional, anatomical and effective connectivity, and histology. The model extends current dual stream models of language processing. It also provides a framework for understanding the role of …

Acknowledgments

I would like to thank David Caplan, Catherine Stoodley, and Joshua Levy for their feedback during the preparation of this manuscript, and Matt Davis and Greg Hickok for their thoughtful reviews of an earlier version of this manuscript. This work was supported by the National Institute of Deafness and Communicative Disorders (R01 DC003108). I have no conflicts of interest to declare.

References (200)

  • A.R. Damasio et al. (1979). Determinants of performance in color anomia. Brain and Language.
  • G. Dehaene-Lambertz et al. (2005). Neural correlates of switching from auditory to speech perception. NeuroImage.
  • M. Dennis (1976). Dissociated naming and locating of body parts after left anterior temporal lobe resection: An experimental case study. Brain and Language.
  • J.L. Elman et al. (1988). Cognitive penetration of the mechanisms of perception: Compensation for coarticulation of lexically restored phonemes. Journal of Memory and Language.
  • D.J. Foss et al. (1973). On the psychological reality of the phoneme: Perception, identification, and consciousness. Journal of Verbal Learning and Verbal Behavior.
  • S.A. Frisch et al. (2000). Wordlikeness: Effects of segmental probability and length on the processing of nonwords. Journal of Memory and Language.
  • G. Gainotti et al. (1986). Anomia with and without lexical comprehension disorders. Brain and Language.
  • M.G. Gaskell et al. (2003). Lexical competition and the acquisition of novel words. Cognition.
  • S.P. Gennari et al. (2007). Context-dependent interpretation of words: Evidence for interactive neural processes. NeuroImage.
  • S.D. Goldinger et al. (2003). Puzzle-solving science: The quixotic quest for units in speech perception. Journal of Phonetics.
  • M. Goldrick et al. (2007). Lexical and post-lexical phonological representations in spoken production. Cognition.
  • D.W. Gow et al. (2009). Articulatory mediation of speech perception: A causal analysis of multi-modal imaging data. Cognition.
  • D.W. Gow et al. (2008). Lexical influences on speech perception: A Granger causality analysis of MEG and EEG source estimates. NeuroImage.
  • Y. Grodzinsky et al. (2006). Neuroimaging of syntax and syntactic processing. Current Opinion in Neurobiology.
  • R.N. Henson (2003). Neuroimaging studies of priming. Progress in Neurobiology.
  • G. Hickok et al. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences.
  • G. Hickok et al. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition.
  • S.A. Kotz et al. (2002). Modulation of the lexical semantic network by auditory semantic priming: An event-related functional MRI study. NeuroImage.
  • D.A. Allport. Speech production and comprehension: One lexicon or two?
  • M. Baese-Berk et al. (2009). Mechanisms of interaction in speech production. Language and Cognitive Processes.
  • A. Basso et al. (1977). Phonemic identification defects in aphasia. Cortex.
  • A. Bell et al. (2003). Effects of disfluencies, predictability, and utterance position on word form variation in English conversational speech. Journal of the Acoustical Society of America.
  • P.S.F. Bellgowan et al. (2003). Understanding neural system dynamics through task modulation and measurement of functional MRI amplitude, latency, and width. Proceedings of the National Academy of Sciences of the United States of America.
  • A. Beretta et al. (2003). An ER-fMRI investigation of morphological inflection in German reveals that the brain makes a distinction between regular and irregular forms. Brain and Language.
  • J. Binder et al. (2000). Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex.
  • S.E. Blumstein et al. (2005). The perception of voice onset time: An fMRI investigation of phonetic category structure. Journal of Cognitive Neuroscience.
  • D. Boatman et al. (2000). Transcortical sensory aphasia: Revisited and revised. Brain.
  • J. Bresnan (2001). Lexical-functional syntax.
  • C.P. Browman et al. (1992). Articulatory phonology: An overview. Phonetica.
  • B. Buchsbaum et al. (2001). Role of left posterior superior temporal cortex in auditory sentence comprehension: An fMRI study. NeuroReport.
  • M.W. Burton et al. (2000). The role of segmentation in phonological processing: An fMRI investigation. Journal of Cognitive Neuroscience.
  • Buchsbaum, B. R., Baldo, J., Okada, K., Berman, K. F., Dronkers, N., D’Esposito, M., & Hickok, G. (in press)....
  • D. Caplan (2007). Functional neuroimaging studies of syntactic processing in sentence comprehension: A selective critical review. Language and Linguistics Compass.
  • D. Caplan et al. (1995). Analysis of lesions by MRI in stroke patients with acoustic–phonetic processing deficits. Neurology.
  • D. Caplan et al. (1994). Selective acoustic phonetic impairment and lexical access in an aphasic patient. Journal of the Acoustical Society of America.
  • D. Caplan et al. (1986). A case study of reproduction conduction aphasia. I. Word production. Cognitive Neuropsychology.
  • A. Caramazza et al. (1986). The role of the (output) phonological buffer in reading, writing and repetition. Cognitive Neuropsychology.
  • M. Catani et al. (2005). Perisylvian language networks of the human brain. Annals of Neurology.
  • E.F. Chang et al. (2010). Categorical speech representation in human superior temporal gyrus. Nature Neuroscience.
  • C.-C. Chen et al. (1979). Ikorovere Makua tonology (part 1). Studies in Linguistic Sciences.