Brain and Language

Volume 89, Issue 3, June 2004, Pages 417-432

Unitary vs multiple semantics: PET studies of word and picture processing

https://doi.org/10.1016/j.bandl.2004.01.010

Abstract

In this paper we examine a central issue in cognitive neuroscience: are there separate conceptual representations associated with different input modalities (e.g., Paivio, 1971, Paivio, 1986; Warrington & Shallice, 1984) or do inputs from different modalities converge on to the same set of representations (e.g., Caramazza, Hillis, Rapp, & Romani, 1990; Lambon Ralph, Graham, Patterson, & Hodges, 1999; Rapp, Hillis, & Caramazza, 1993)? We present an analysis of four PET studies (three semantic categorisation tasks and one lexical decision task), two of which employ words as stimuli and two of which employ pictures. Using conjunction analyses, we found robust semantic activation, common to both input modalities in anterior and medial aspects of the left fusiform gyrus, left parahippocampal and perirhinal cortices, and left inferior frontal gyrus (BA 47). There were modality-specific activations in both temporal poles (words) and occipitotemporal cortices (pictures). We propose that the temporal poles are involved in processing both words and pictures, but their engagement might be primarily determined by the level of specificity at which an object is processed. Activation in posterior temporal regions associated with picture processing most likely reflects intermediate, pre-semantic stages of visual processing. Our data are most consistent with a hierarchically structured, unitary system of semantic representations for both verbal and visual modalities, subserved by anterior regions of the inferior temporal cortex.

Introduction

Conceptual knowledge lies at the heart of the cognitive system, supporting a wealth of mental processes, including language comprehension and production, reasoning, and object recognition. This paper addresses a central issue concerning the functional and neural architecture of the conceptual system: are all these processes subserved by a unitary system of conceptual representations, or are there separate representations for the same concept for different modalities of input or output? This issue has a long history in cognitive psychology and neuropsychology, but no consensus has yet emerged. Some researchers have argued for a multiple semantics account, proposing that there are distinct conceptual representations for the verbal (word) and visual (object) input modalities (e.g., Paivio, 1971, Paivio, 1991; Shallice, 1988; Warrington & Shallice, 1984). Others have rejected this position, claiming that all processing routes converge on a single set of conceptual representations common to both modalities (e.g., Caramazza, Hillis, Rapp, & Romani, 1990).

It is important to note at the outset that many of the most influential neuropsychological studies that have been presented as evidence for modality-specific semantics have employed visually presented words and visually presented pictures (e.g., McCarthy and Warrington, 1986, McCarthy and Warrington, 1988; Shallice, 1988, Shallice, 1993; Warrington & McCarthy, 1994). Thus, in one sense, such investigations do not test whether or not the semantic system is “amodal” because both inputs are visual. Furthermore, “verbal” does not correspond in a straightforward manner to a sensory modality at all, but reflects some combination of content, context, and format (Plaut, 2002). Despite such criticisms, a large number of functional neuroimaging studies addressing the unitary vs multiple semantics controversy have also employed visually presented pictures and words (see below). The result is that, unlike a visual vs auditory contrast, the comparison of visually presented pictures and words is not orthogonal with respect to sensory modality. It follows that the term “modality” might more accurately be replaced with “material,” because the comparison in these cases is not between sensory systems of input (auditory, visual, and tactile) but between objects (or pictures of them) and printed words. However, for consistency with the previous literature, we continue to use the term “input modality” in considering whether the conceptual knowledge accessed by pictures and words forms two neurally distinct components of the semantic system (modality-specific) or whether both stimulus types converge on to the same set of representations (unitary semantics).

The notion of modality-specific conceptual representation stems predominantly from Paivio's Dual Coding Theory. On this account, human cognition has developed to deal simultaneously with verbal and non-verbal objects and events, giving rise to two independent but interconnected symbolic systems. “One (the image system) is specialised for dealing with perceptual information concerning non-verbal objects and events. The other (the verbal system) is specialised for dealing with linguistic events” (Paivio, 1971, p. 379). The verbal system deals directly with linguistic input while also serving a symbolic function with respect to non-verbal input, which has direct access only to a non-verbal (or imagery) system. The two systems are assumed to be functionally and structurally distinct, although interconnected by referential relations between representations in the two systems.

Neuropsychological evidence for separate visual and verbal semantic systems was first provided by Warrington (1975), in a study of two visual agnosic patients, one of whom (EM) appeared to be able to recognise a visually presented object, but not its name, while the other (AB) was able to recognise the name of an object but not its visual representation. Further cases of related dissociations in picture- vs word-based semantic judgements have supported the notion of neurally distinct, modality-specific semantic representations (e.g., Ferreira, Giusiano, Ceccaldi, & Poncet, 1997; Warrington & McCarthy, 1994; Warrington & Shallice, 1984). A strong version of the multiple, modality-specific semantics position implies the existence of distinct, self-contained semantic systems duplicated across modalities. However, most proponents propose that the systems are in constant communication. For example, Shallice (1988) favours the notion of distributed but interconnected modality-specific subsystems, each of which can only be accessed directly from the associated input modality.

A recent study by Saffran, Coslett, Martin, and Boronat (2003) presents data from a patient with progressive fluent aphasia (BA) who showed a clear performance deficit for words relative to pictures on a variety of semantic tasks. They interpret these findings as supporting a distributed semantic system, with knowledge distributed across distinct and neurally separable subsystems (e.g., colour is represented in the visual system, sounds in the auditory system, knowledge of object use in the kinaesthetic system, etc.). Thus, the impairment for verbal stimuli shown by BA reflected damage to a single component of the semantic system (the “propositional/encyclopaedic” store). The type of knowledge tapped by a task is assumed to reflect the input route primarily involved in its acquisition. Pictures of objects initially access structural descriptions, and only subsequently engage motor, sensory, and propositional/encyclopaedic information. The authors argue that the semantic system is multimodal, with different types of information stored in different brain regions and in different representational formats. However, Lambon Ralph and Howard (2000), describing a semantic dementia patient (IW) with a similar performance profile, interpreted the disproportionate semantic impairment for words in terms of the theory of “privileged accessibility” to a unitary semantic system (Caramazza et al., 1990; see below).

Other researchers have presented evidence that semantic processes are lateralised, with only the left hemisphere supporting naming (e.g., Coslett & Saffran, 1992; Luzzatti, Rumiati, & Ghirardi, 1998). Patients with semantic impairments resulting from left-sided damage have relatively preserved comprehension of visually presented objects and their functions, but they are unable to name them. Thus, in the context of the multiple semantics account, the left hemisphere supports verbal semantics and the right hemisphere supports visual semantics. However, problematic for this account is the observation that semantic access and retrieval from visual input in optic aphasia (typically caused by lesions to left medial occipital cortex) is not as preserved as originally claimed (e.g., Hillis & Caramazza, 1995; Riddoch & Humphreys, 1987). This suggests that a distinction between right and left hemisphere processing on the basis of verbal vs visual semantics may not be as straightforward as claimed by these authors.

A persistent difficulty for studies which propose modality- or material-specific semantics is to demonstrate unequivocally that the observed dissociation is, in fact, located at the level of conceptual representation, rather than within the pre-semantic representations or processes necessary for access to the conceptual system. Verbal and non-verbal tasks are not always comparable in terms of complexity or other relevant factors, which may have produced dissociations that are more apparent than real (e.g., Caramazza et al., 1990; Riddoch, Humphreys, Coltheart, & Funnell, 1988). Unless each task unequivocally taps into conceptual representations, modality-specific effects may emerge because of impaired pre-semantic representations (e.g., visual-structural rather than visual-semantic).

Caramazza et al. (1990) claim that all modality-specific semantic deficits reported in the literature can be accommodated within the architecture of unitary semantics theory, and argue that a common conceptual system will be recruited during the processing of an item, irrespective of modality, while allowing for an “asymmetry” in recruitment according to stimulus modality. Their organised unitary conceptual hypothesis (OUCH) assumes that perceptually salient features of a visually presented object will have privileged access to its semantic representation, even though there is a single, undifferentiated representation for the visually presented object and the word for that object. Basic to this theory is the claim that, for any item, particular predicates are more important than others in defining its meaning. Thus, while the semantic representation of a visually presented object and its verbal description is the same, the procedure by which that representation is accessed may differ. A visually or aurally presented word will activate the lexicon, which will, in turn, activate the semantic properties which define its meaning, but a visually presented object will directly access those same semantic properties. These assumptions, of privileged accessibility and bias towards particular subcomponents of a semantic representation, are used to explain how modality-specific semantic effects can arise from damage to a unitary conceptual system.

Lambon Ralph, Graham, Patterson, and Hodges (1999) examined the definitions that nine patients with semantic dementia provided for concepts presented in spoken word or picture form. They found that the extent of conceptual knowledge successfully tapped by verbal input (the concept label) closely predicted the quality of conceptual information produced following pictorial input, consistent with both inputs tapping a common semantic system. Although two patients were particularly poor at defining concepts from picture presentations, the authors claimed that this apparent modality effect was caused by additional pre-semantic visuo-perceptual deficits in these patients.

Thus, although neuropsychological dissociations have often been taken as evidence for modality-specific conceptual systems, the data are open to alternative explanations consistent with a unitary, modality-independent system. Nevertheless, the unitary semantics model cannot fully explain the size of the dissociations reported in task performance, such as the relative preservation of gesturing the meaning of visually presented objects relative to naming them across all levels of severity among optic aphasia patients (Plaut, 2002). Recently, some authors have suggested a “middle ground” in the unitary vs multiple semantics debate. For example, Plaut (2002) presents a computational account of optic aphasia in which the same semantic representation is activated regardless of the mode of presentation, but some parts of the system become relatively specialised for particular types of input, leading to more selective deficits than would otherwise be expected.

Neuroimaging techniques offer the capacity to explore the functional neuroanatomy of the conceptual system, and, unlike the lesion-deficit approach, can directly address the extent to which neural activity in the healthy brain is differentiated as a function of the form of input. Furthermore, functional neuroimaging provides a systems-level approach whereby the distributed pattern of recruitment across the brain can be explored.

The first direct comparison of word and picture processing was undertaken by Vandenberghe, Price, Wise, Josephs, and Frackowiak (1996) using PET, the results of which have been interpreted as favouring a unitary conceptual system, undifferentiated by modality. However, a number of significant material-specific effects were in fact observed in this study. Healthy subjects carried out three match-to-sample tasks on triplets of either words or pictures. The tasks were based on (i) meaning (termed associative semantics), (ii) real-life size (visual semantics), or (iii) physical size (baseline). In both semantic tasks, a large area of left hemisphere activation, common to pictures and written words, was found which extended from the superior occipital gyrus through the middle temporal, inferior temporal, and fusiform gyri to the inferior frontal gyrus. No differences between word and picture processing were found as a function of semantic task, and there were no modality-specific semantic effects in the right hemisphere. Modality-specific activations which occurred across all tasks (including baseline) were taken to represent pre-semantic processing effects. For pictures relative to words (irrespective of task), there was activation of right middle occipital gyrus (BA 19/37). The reverse contrast (words relative to pictures) produced activation of left inferior parietal lobe (BA 40). At face value, the results favoured an extensive distributed conceptual system underlying both words and pictures. Nevertheless, in addition to pre-semantic modality effects, there was also significant word-specific semantic recruitment in the left hemisphere (discussed below).

Further evidence for modality-independent conceptual activation comes from a PET study by Moore and Price (1999) in which the left fusiform gyrus (BA 20/37) was recruited during the processing of meaningful over non-meaningful stimuli, irrespective of input modality (using silent naming and viewing of words and objects). Other regions of common activation in this study included right inferior frontal gyrus (BA 47), and anterior cingulate (BA 24/6), most probably reflecting the greater overall demands of the semantic over non-semantic task conditions. A meta-analysis by Devlin et al. (2002a) on a set of seven PET studies (including Moore & Price, 1999) also found that the same network was activated by words and pictures, although the specific regions of common recruitment were not described.

In addition to the neuroimaging studies that have directly contrasted conceptual activation for words vs pictures, a number have compared conceptual activations for a range of other modality contrasts. Buchel, Price, and Friston (1998), in a PET study of visual and tactile reading, compared cortical activations in congenitally blind, late blind, and sighted subjects. All subject groups showed activation in overlapping regions of the left fusiform (BA 37) for visual and tactile reading of meaningful words relative to non-word letter strings. The results were consistent with this region being unaffected by variations in sensory input characteristics and linking converging inputs across different modalities, giving rise to conceptual representations. Other studies have also highlighted at least partially overlapping regions of the left fusiform gyrus across different modalities (e.g., generating mental images of object words relative to listening to abstract words (D'Esposito et al., 1997); making semantic relative to phonological decisions on auditorily presented words (Demonet et al., 1992); naming words and objects relative to letters and colours (Price, Moore, Humphreys, Frackowiak, & Friston, 1996); semantic classification of words characterised by auditory, visual or non-sensory features (Noppeney & Price, 2002)).

One of the most influential bodies of recent research on the structure and organisation of semantic memory has been developed by Martin and colleagues (Martin, 2001; Martin, Haxby, Lalonde, Wiggs, & Ungerleider, 1995; Martin, Ungerleider, & Haxby, 2000). On the basis of a series of neuroimaging investigations, these authors propose that qualitatively different forms of information are represented in lateral and ventral temporal cortices and that the location of these sites appears to parallel the organisation of the sensory and motor systems, a view also reflected in recent neuropsychological work, for example by Saffran et al. (2003), described above. In an fMRI study which examined activations during viewing, match-to-sample, and naming of pictures of animals and tools, they found distinct activations for these two categories of object (Chao, Haxby, & Martin, 1999). However, the same patterns of activity were observed when subjects read silently and answered questions about the written names of animals and tools, suggesting that the semantic processing of words and pictures engages a common set of representations.

The inferior frontal gyrus (IFG) has also been consistently activated, irrespective of the modality of input, in several neuroimaging studies (e.g., Thompson-Schill, Aguirre, D'Esposito, & Farah, 1999; Wagner, Desmond, Demb, Glover, & Gabrieli, 1997). However, it is unclear whether IFG activation is associated with conceptual representation or processing per se, or whether it relates to more general differences in the overall cognitive demands between test and baseline conditions. There is no evidence that patients with damage restricted to the IFG have semantic deficits. However, some studies which have explicitly attempted to match semantic and non-semantic conditions on overall “difficulty” have reproduced robust IFG activations during semantic relative to non-semantic conditions (e.g., Demb et al., 1995).

While several studies have been interpreted as consistent with a unitary conceptual system, as outlined above (e.g., Vandenberghe et al., 1996), all of these studies either (i) show some modality-specific effects in addition to the areas of common activation or (ii) present insufficient information to exclude such effects. Consideration of semantic modality effects requires an important distinction to be made between modality-specific activation of conceptual knowledge and modality-specific activations associated with earlier stages of input processing.

Two posterior regions have been identified as candidates for early, pre-semantic stages of input processing. First, the lateral occipital complex is preferentially activated by pictures of objects with clear shape interpretations relative to non-identifiable visual textures or noise patterns (e.g., Kanwisher, Chun, McDermott, & Ledden, 1996; Malach et al., 1995), a finding which is consistent with the deficits associated with damage to this region (e.g., Farah, Hammond, Mehta, & Ratcliff, 1989; Feinberg, Dyckes-Berke, Miner, & Roane, 1995). Malach et al. (1995) found no evidence for differential activation in the lateral occipital complex as a function of familiarity (pictures of real-life objects vs degraded, non-identifiable objects), suggesting that the region is not involved in semantic stages of representation. In a recent literature review, Grill-Spector, Kourtzi, and Kanwisher (2001) conclude that the lateral occipital complex functions as a general-purpose system for the analysis of object shape and is not associated with a conceptual level of representation. They propose that it is hierarchically organised, with sensitivity to the local features of an object in more posterior, retinotopic areas, and more global representations (whole or half objects) associated with activations in anterior-lateral areas.

A second region, located in the middle portion of the left fusiform gyrus (BA 37) and called the Visual Word Form Area (VWFA), responds maximally to visually presented words independently of their location on the retina (Cohen et al., 2000). It lies posterior to those regions of the fusiform gyrus which are associated with high over low semantic processing demands (Cohen et al., 2000). In contrast, within the VWFA, recruitment seems to be associated with a distinction between alphabetic material (e.g., real words or consonant strings) and non-alphabetic stimuli (e.g., false fonts or fixation), but not with semantics. Furthermore, VWFA recruitment does not seem to be differentiated as a function of whether the stimuli are real words or pseudowords (e.g., Dehaene, Le Clec, Poline, Le Bihan, & Cohen, 2002; Fiez & Petersen, 1998), although a relative reduction in activation is observed for consonant strings (Cohen et al., 2002), consistent with a sensitivity of this region to orthographic regularities. Thus, the data are consistent with functional differentiation along the extent of the fusiform gyrus, with higher level, semantic representation in anterior regions, and pre-semantic form- and orthography-based representation in more posterior regions.

Once activation of these pre-semantic and/or intermediate level input regions is excluded from consideration, there is limited support for modality-specific semantic recruitment. In the Vandenberghe et al. (1996) study, word-specific semantic activations were found in left superior temporal sulcus, left anterior middle temporal gyrus, and left inferior frontal sulcus. The only picture-specific semantic activation was observed in left posterior inferior temporal sulcus. Consistent with the Vandenberghe study, Moore and Price (1999) found semantic activation for words over pictures in the left superior temporal gyrus (BA 22/41), extending to include the supramarginal gyrus, although this finding may have related more to differing phonological requirements among the tasks than to modality-specific semantic recruitment per se (e.g., Demonet, Price, Wise, & Frackowiak, 1994; Moore & Price, 1999). Picture-specific activation was found in ventral occipitotemporal cortices bilaterally (BA 19) including lateral occipital cortex, probably relating to pre-semantic object processing.

The overall picture that has emerged from the neuroimaging literature remains unclear with respect to input modality effects. Although it is relatively well established that the cortical recruitment for words and pictures differs in more posterior, pre-semantic processing areas (perhaps reflecting the distinction between visual form and orthography-based representations), whether there is modality-specificity at the level of semantic representations remains poorly understood. In this study, we directly address the question of whether there are distinct (separable) neural regions that underlie the semantic representation of objects and words by comparing brain activations associated with each type of material. If activation of anterior temporal regions associated with semantic representation is essentially undifferentiated by the modality of input (objects or words), this would be most consistent with unitary semantics theory (Caramazza, 2000). Conversely, a finding that recruitment in semantic regions differs according to the type of material presented would be more consistent with Dual Coding Theory (Paivio, 1971, Paivio, 1986) and the multiple semantics theory described by Warrington, Shallice and others (e.g., Shallice, 1988; Warrington & Shallice, 1984).

We present a meta-analysis of four PET studies (three semantic categorisation tasks and one lexical decision task), two using pictures as stimuli and two using words. Although lexical decision is likely to place weaker demands on the semantic system than semantic categorisation, the task has been shown to robustly recruit regions of the brain involved in semantic processing (see Bookheimer, 2002 for a recent review). In a comparison of a lexical decision task and a semantic categorisation task (both using words), we found additional frontal and cerebellar recruitment during the latter (presumably relating to working memory, attentional, and semantic demands not present during lexical decision). However, a direct comparison of the two tasks revealed no significant differences in the pattern or extent of recruitment (Devlin et al., 2002b). On this basis, we chose to include the lexical decision task. Methodologies and procedures, stimulus sets, and scanner settings were largely held constant across all tasks. In taking this approach, we believe we can explore and compare regional recruitment for word and picture processing with a high level of confidence that differential activations relate to differences in stimulus-specific processing rather than to methodological or task differences per se. Although it might be argued that, by restricting our study to categorisation and lexical decision conditions, we compromise our capacity for exploring the neural correlates of conceptual processing at different levels or types of demand, we know that even subtle methodological differences in a task paradigm can have wide-ranging effects on regional activations. Consequently, we have placed our emphasis on examining modality differences between words and pictures within a relatively unified task structure. However, we should note that, although words by definition evoke naming responses, we do not know whether names are automatically evoked for pictures. This is an important issue to which we will return in the discussion.

There are two competing predictions regarding the pattern of recruitment during conceptual processing of words and pictures: (a) if conceptual representations for words and pictures are separable and non-overlapping, we would expect them to involve distinct semantic processing regions; (b) if the unitary conceptual system position is correct, there should be extensive coactivation for word and picture categorisation in the more anterior, semantic regions, although there may be differential activations in posterior areas relating to modality-specific pre-semantic effects.
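To make the logic of prediction (b) concrete, the sketch below illustrates how a conjunction analysis identifies voxels that are reliably activated in every task-vs-baseline contrast. It is a conceptual illustration only, written in Python with NumPy and using the minimum-statistic formulation of a conjunction; the function name, toy data, and threshold are our own for illustration and do not reproduce the SPM procedure actually applied to the studies reported here.

    import numpy as np

    def conjunction_map(t_maps, t_threshold):
        # Minimum-statistic conjunction (illustrative only): a voxel counts
        # as commonly activated only if its t-value exceeds the threshold in
        # EVERY contrast -- a logical AND across tasks, which distinguishes
        # a conjunction from a simple average of effects.
        stacked = np.stack(t_maps, axis=0)   # shape: (n_tasks, x, y, z)
        min_t = stacked.min(axis=0)          # weakest evidence across tasks
        return min_t > t_threshold           # boolean mask of common voxels

    # Toy stand-ins for the four task-vs-baseline t-maps
    # (two word tasks and two picture tasks in the present design).
    rng = np.random.default_rng(0)
    maps = [rng.normal(size=(4, 4, 4)) for _ in range(4)]
    common = conjunction_map(maps, t_threshold=3.1)
    print(f"{common.sum()} voxels exceed threshold in all four tasks")

Taking the voxelwise minimum enforces the requirement that each task individually activates a voxel; modality-specific effects, by contrast, are sought in direct comparisons between conditions (e.g., words vs pictures) rather than in the conjunction.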


Subjects

A total of 38 subjects (mean age = 30 years; range = 21–48 years; 37 male, 1 female) were tested across the four tasks described in this study. All were right-handed, native English speakers without any known history of neurological or psychiatric illness. No subject participated in more than one task. Each gave informed consent and was medically screened for PET prior to entering the scanning room.

Words 1: Lexical decision (Devlin et al., 2002b)

Twelve participants (mean age 32, range 21–44 years) performed a lexical decision task on visually presented

Semantic activations

To determine the semantic network activated by words and pictures, we entered the semantic-vs-baseline contrasts from each study into a conjunction analysis. Table 1 presents cluster extents and voxel-level statistics. In the left hemisphere, the largest cluster encompassed inferior frontal gyrus (BA 47, extending superiorly to BA 45) and anterior portions of the insula and temporal pole (BA 38). A second cluster encompassed the length of the fusiform gyrus (BA 20/36), extending to include

Discussion

In this study we found that both words and pictures robustly activated a common region of the left fusiform gyrus (BA 36, anterior BA 37), left inferior frontal gyrus (BA 47) and the most anterior aspect of the left temporal pole (BA 38) during semantic task conditions relative to a low-level baseline. A conjunction of the four tasks also produced right hemisphere activation of inferior frontal gyrus (BA 47) and cerebellum, although when we inclusively masked words with pictures, no right-sided

Acknowledgements

This research was supported by an MRC programme grant to L.K.T.

References (89)

  • C.T Ferreira et al.

    Optic aphasia: Evidence of the contribution of different neural systems to object and action naming

    Cortex

    (1997)
  • K.J Friston et al.

    Multisubject fMRI studies and conjunction analyses

    NeuroImage

    (1999)
  • T.J Grabowski et al.

    Effects of gender on blood flow correlates of naming concrete entities

    NeuroImage

    (2003)
  • K Grill-Spector et al.

    The lateral occipital complex and its role in object recognition

    Vision Research

    (2001)
  • N Kanwisher et al.

    Functional imaging of human visual recognition

    Cognitive Brain Research

    (1996)
  • M.A Lambon Ralph et al.

    Is a picture worth a thousand words? Evidence from concept definitions by patients with semantic dementia

    Brain and Language

    (1999)
  • A.C.H Lee et al.

    “Pray or Prey?” Dissociation of semantic memory retrieval from episodic memory processes using positron emission tomography and a novel homophone task

    NeuroImage

    (2002)
  • C.J Moore et al.

    Three distinct ventral occipitotemporal regions for reading and object naming

    NeuroImage

    (1999)
  • U Noppeney et al.

    Retrieval of visual, auditory, and abstract semantics

    NeuroImage

    (2002)
  • C.J Price et al.

    Cognitive conjunction: A new approach to brain activation experiments

    NeuroImage

    (1997)
  • S.L Rossell et al.

    Sex differences in functional brain activation during a lexical visual field task

    Brain and Language

    (2002)
  • S.L Thompson-Schill

    Neuroimaging studies of semantic memory: Inferring “how” from “where”

    Neuropsychologia

    (2003)
  • S.L Thompson-Schill et al.

    A neural basis for category and modality specificity of semantic knowledge

    Neuropsychologia

    (1999)
  • L.K Tyler et al.

    Towards a distributed account of conceptual knowledge

    Trends in Cognitive Sciences

    (2001)
  • E.K Warrington et al.

    Multiple meaning systems in the brain: A case for visual semantics

    Neuropsychologia

    (1994)
  • Baayen, R., & Pipenbrook, R. (1995). The CELEX Lexical Database. Philadelphia: Linguistic Data Consortium.
  • M.F Beauvois

    Optic aphasia: A process of interaction between vision and language

    Philosophical Transactions of the Royal Society of London. Series B. Biological Sciences

    (1982)
  • B.M Bly et al.

    Functional anatomy of object recognition in humans: Evidence from positron emission tomography and functional magnetic resonance imaging

    Current Opinion in Neurology

    (1997)
  • S Bookheimer

    Functional MRI of language: New approaches to understanding the cortical organization of semantic processing

    Annual Review of Neuroscience

    (2002)
  • C Buchel et al.

    A multimodal language region in the ventral visual pathway

    Nature

    (1998)
  • A Caramazza et al.

    The multiple semantics hypothesis: Multiple confusions?

    Cognitive Neuropsychology

    (1990)
  • A Carpentier et al.

    Functional MRI of language processing: Dependence on input modality and temporal lobe epilepsy

    Epilepsia

    (2001)
  • T Chaminade et al.

    Leader or follower? Involvement of the inferior parietal lobule in agency

    Neuroreport

    (2002)
  • L.L Chao et al.

    Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects

    Nature Neuroscience

    (1999)
  • L Cohen et al.

    The visual word form area. Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients

    Brain

    (2000)
  • L Cohen et al.

    Language-specific tuning of visual cortex? Functional properties of the visual word form area

    Brain

    (2002)
  • M Coltheart

    The MRC psycholinguistic database

    Quarterly Journal of Experimental Psychology

    (1981)
  • A.R Damasio

    The brain binds entities and events by multiregional activation from convergence zones

    Neural Computation

    (1989)
  • A.R Damasio

    Aphasia

    New England Journal of Medicine

    (1992)
  • H Damasio et al.

    A neural basis for lexical retrieval

    Nature

    (1996)
  • S Dehaene et al.

    The visual word form area: A prelexical representation of visual words in the fusiform gyrus

    Neuroreport

    (2002)
  • J.B Demb et al.

    Semantic encoding and retrieval in the left inferior prefrontal cortex: A functional MRI study of task difficulty and process specificity

    Journal of Neuroscience

    (1995)
  • J.F Demonet et al.

    The anatomy of phonological and semantic processing in normal subjects

    Brain

    (1992)
  • J.F Demonet et al.

    A PET study of cognitive strategies in normal subjects during language tasks. Influence of phonetic ambiguity and sequence processing on phoneme monitoring

    Brain

    (1994)