Review article
Redefining the resolution of semantic knowledge in the brain: Advances made by the introduction of models of semantics in neuroimaging
Introduction
The neurobiological basis of semantic knowledge is a key topic in cognitive neuroscience. On average, human adults know between 16,000 and 58,000 words (Keuleers et al., 2015), and across the lifespan we continue to expand our mental lexicon (Shafto and Tyler, 2014). A vital characteristic of the mental lexicon is that many related words exist, referring to similar but discrete concepts (e.g. cucumber versus zucchini, blackbird versus crow). In recent decades, however, a common strategy for mapping the semantic network has been to examine response differences elicited by stimuli (pictures, words) referring to very different entities, for instance faces, buildings, animals and tools. Comparing across these broad categories yields a rough map of how these concepts are represented in the brain. This approach has produced several seminal insights into the neurobiology of language (for reviews: Binder et al. (2009); Price (2012)). For one, it demonstrated which regions of the brain are activated by semantic processing and which are not. However, the results do not shed any light on how our brains represent meaning for all but the coarsest conceptual differences (e.g. distinguishing furniture from animal concepts). Recent advances in cognitive science, linguistics, and neuroimaging have led to the use of fine-grained semantic models that capture in depth the similarities and differences between closely related concepts, replacing one-dimensional characterizations (e.g. animate versus inanimate) with a multi-dimensional approach derived from a large number of variables (e.g. concept features). Using multivariate analysis, the behavioural relationships between concepts can be compared to the response patterns in the brain elicited by the same concepts (Kriegeskorte et al., 2006). The goal of this paper is to critically review different types of semantic models and to demonstrate how they can be used to elucidate the neurobiology of language using neuroimaging.
We review similarities and differences between these recent semantic models and describe how they have led to several breakthroughs in the field of semantic processing. We also propose novel approaches to implement subject-specific semantic models in order to study semantic deficits using neuroimaging. Ultimately, learning about the structure of our lexicon teaches us how our brain structures information.
Section snippets
What are semantic models?
The most important aim of constructing semantic models is to provide a quantifiable version of our mental lexicon. Semantic models can be generated using very different strategies. Whatever the strategy, these models can represent the semantic similarity between any pair of concepts and provide a continuous numerical estimate of their proximity in semantic space. Such an approach contrasts with the “categorical” approach, where a binary measure is used for every concept that reflects whether or not a concept is
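The contrast between a continuous and a categorical measure of similarity can be made concrete in a few lines of code. The following sketch is illustrative only: the feature vectors are invented for this example and are not taken from any actual feature-norm dataset discussed in the article.

```python
import numpy as np

# Hypothetical feature vectors for three concepts (one row each), as
# might be derived from feature norms or word embeddings. The values
# here are made up purely for illustration.
concepts = ["cucumber", "zucchini", "blackbird"]
vectors = np.array([
    [0.90, 0.80, 0.10, 0.00],  # cucumber
    [0.85, 0.90, 0.15, 0.00],  # zucchini
    [0.10, 0.00, 0.90, 0.80],  # blackbird
])

def cosine_similarity(a, b):
    """Continuous estimate of semantic proximity between two concept vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pairwise similarity matrix: a graded measure, unlike a binary
# "same category / different category" label.
sim = np.array([[cosine_similarity(u, v) for v in vectors] for u in vectors])
print(np.round(sim, 2))
```

On these toy vectors, cucumber and zucchini come out far more similar to each other than either is to blackbird, which a binary vegetable/bird label would record only as "same" versus "different".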
How can semantic models be tested on neuroimaging data?
Semantic models provide a quantitative way of describing the pairwise relationships between concepts, and we ask whether this semantic information can partially explain the variable neuronal response to these concepts. As discussed in Naselaris and Kay (2015), multivariate pattern analysis (MVPA) techniques can be used without semantic models, for instance by using semantic categories. However, the use of semantic models has several advantages. In particular, they do not require a priori
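One common way of relating a semantic model to brain data, representational similarity analysis (Kriegeskorte et al., 2006), can be sketched as follows. This is a minimal illustration under assumed inputs: both the model vectors and the "neural" response patterns are simulated random data standing in for real feature norms and voxel patterns.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 8 concepts described by a 10-dimensional semantic
# model, and the response patterns those concepts elicit across 50
# voxels. Both are simulated here for illustration.
model_vectors = rng.standard_normal((8, 10))
neural_patterns = rng.standard_normal((8, 50))

# Representational dissimilarity matrices (condensed upper triangles):
# pairwise distances between concepts in each space.
model_rdm = pdist(model_vectors, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# Rank-correlating the two RDMs asks whether concepts the model treats
# as similar also evoke similar activity patterns.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RDM correlation: rho={rho:.2f}, p={p:.2f}")
```

In a real analysis, `neural_patterns` would come from fMRI, MEG or intracranial recordings, and the correlation would typically be computed within a searchlight or a region of interest and assessed against a permutation-based null distribution.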
Advances made by using semantic models
Having introduced semantic models and the most common approaches for assessing whether changes in neural activity reflect semantic information, we discuss a number of advances made in characterizing the neurobiology of language using models of semantics. First, we discuss how semantic models have been successfully implemented to test a priori hypotheses about semantic information processing in the anterior temporal lobe. This is a specific instance where semantic models convincingly
Future research using models of semantics
A next step would be the introduction of semantic models in neuroimaging studies of patients with semantic deficits. Only a few attempts have been made to map the semantic space of patients, and most of this work has been performed in patients suffering from psychiatric diseases (de Boer et al., 2018) and in Alzheimer’s disease (Pakhomov et al., 2015, 2012; Pakhomov and Hemmy, 2014). A notable study in the domain of PPA analyzed unconstrained speech of patients with the semantic variant of PPA (Hoffman et
Conclusion
The introduction of semantic models to functional neuroimaging of the semantic network has been very successful overall. Various multivariate approaches can be used to test whether neural activity patterns, measured using fMRI, MEG or intracranial recordings, reflect semantic information. An advantage of semantic models is that they can be used to study the meaning of concepts derived from pictures but also from other modalities such as written and spoken words. Because of
Funding
This work was supported by Federaal Wetenschapsbeleid [Belspo 7/11]; FWO [Grant no. G0925.15] and KU Leuven [OT/12/097, C14/17/108]. R.V. is a Senior Clinical Investigator and R.B. is a postdoctoral fellow of the Research Foundation Flanders (FWO). S.D.D. was supported through ARC grant DE140101749.
Glossary
- Convolutional neural network (CNN)
- a computational architecture containing a series of layers resembling the architecture of the visual cortex. Across the layers, the input data are transformed. Layers can be fully connected and contain an equal number of units, or, during convolution, inputs across a layer can be pooled into a smaller number of units in the next layer
- Word Embeddings
- n-dimensional real-valued vectors, with n typically ranging between 1 and 1000 elements. These vectors capture the
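The pooling step mentioned in the CNN glossary entry above can be shown in miniature. This toy example (not drawn from the article) reduces a 4x4 input layer to a 2x2 layer by averaging non-overlapping 2x2 blocks, the basic operation by which a layer's inputs are pooled into fewer units.

```python
import numpy as np

# A toy 4x4 "layer" of unit activations.
layer = np.arange(16, dtype=float).reshape(4, 4)

# Average pooling: each non-overlapping 2x2 block of the input is
# collapsed into a single unit of the next, smaller layer.
pooled = layer.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(pooled)  # 2x2 array; each unit summarizes a 2x2 neighbourhood
```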
References (108)
- et al. Reading visually embodied meaning from the brain: visually grounded computational models decode visual-object mental imagery induced by written text. NeuroImage (2015)
- et al. Memorability: a stimulus-driven perceptual neural signature distinctive from memory. NeuroImage (2017)
- et al. The neurobiology of semantic memory. Trends Cogn. Sci. (2011)
- et al. Word meaning in the ventral visual path: a perceptual to conceptual gradient of semantic coding. NeuroImage (2016)
- et al. Right fusiform response patterns reflect visual object identity rather than semantic similarity. NeuroImage (2013)
- et al. The ‘when’ and ‘where’ of semantic coding in the anterior temporal lobe: temporal representational similarity analysis of electrocorticogram data. Cortex (2016)
- et al. Controlled semantic cognition relies upon dynamic and flexible interactions between the executive ‘semantic control’ and hub-and-spoke ‘semantic representation’ systems. Cortex (2018)
- et al. Understanding what we see: how we derive meaning from vision. Trends Cogn. Sci. (2015)
- et al. An attractor model of lexical conceptual processing: simulating semantic priming. Cogn. Sci. (1999)
- et al. Clinical use of semantic space models in psychiatry and neurology: a systematic review and meta-analysis. Neurosci. Biobehav. Rev. (2018)