Haptic study of three-dimensional objects activates extrastriate visual areas

https://doi.org/10.1016/S0028-3932(02)00017-9

Abstract

In humans and many other primates, the visual system plays the major role in object recognition. But objects can also be recognized through haptic exploration, which uses our sense of touch. Nonetheless, it has been argued that the haptic system makes use of ‘visual’ processing to construct a representation of the object. To investigate possible interactions between the visual and haptic systems, we used functional magnetic resonance imaging to measure the effects of cross-modal haptic-to-visual priming on brain activation. Subjects studied novel three-dimensional clay objects either visually or haptically before entering the scanner. During scanning, subjects viewed visually primed, haptically primed, and non-primed objects. They also haptically explored non-primed objects. Visual and haptic exploration of non-primed objects produced significant activation in several brain regions, and produced overlapping activation in the middle occipital area (MO). Viewing visually and haptically primed objects produced more activation than viewing non-primed objects in both area MO and the lateral occipital area (LO). In summary, haptic exploration of novel three-dimensional objects produced activation not only in somatosensory cortex but also in areas of the occipital cortex associated with visual processing. Furthermore, previous haptic experience with these objects enhanced activation in visual areas when these same objects were subsequently viewed. Taken together, these results suggest that the object-representation systems of the ventral visual pathway are exploited for haptic object perception.

Introduction

Vision is the primary sensory modality that humans and other primates use to identify objects in their environment. Nevertheless, we also use our sense of touch (haptics) to perceive the shape, size, and other characteristics of objects. In many cases, vision and haptics provide the same information about the object’s structure and surface features. Although both systems can be used to identify objects, there are clear differences in the nature of the information that is immediately available to each system. The haptic system can receive information only about objects that reside within personal space, i.e. those objects that are within arm’s reach. The visual system can receive information not only about objects that reside within personal space, but also about those that are at some distance from the observer. When objects are at a distance, only the surfaces and parts of an object that face the observer can be processed visually. When objects are within reach, however, the object can be manipulated, thus revealing the structure and features of the previously unseen surfaces and parts.

Despite the differences in the information that is available to vision and haptics, there is evidence that the higher-order processing of the two systems may deal with their respective inputs in much the same way. For example, in most situations, visual recognition of objects is viewpoint dependent. That is, if an object is visually explored from a particular viewing angle, recognition will be better for that view of the object than for other views [9], [11]. Although the concept of ‘viewing angle’ in haptic exploration of fixed objects is not as well-defined as it is in vision, recent work by Newell et al. [16] has shown that haptic recognition of objects is much better when the study and test ‘views’ are the same. This suggests that the information about the structure of the object may be stored in a similar way by the visual and haptic systems.

In fact, there is some speculation that visual and haptic object representations are so similar that they might actually be shared between the two modalities. Reales and Ballesteros [19] used cross-modal priming between vision and haptics to show that exposure to real objects in one modality affected later naming of the objects when they were presented using the other modality. The term ‘priming’ refers to the facilitative effect that prior exposure to a stimulus has on the perception of that stimulus during a subsequent encounter. People are usually quite unaware of this facilitation. In a cross-modal experiment, then, subjects would first be exposed to objects in one modality and then would be required to identify or discriminate between the same objects presented in the other modality. Interestingly, cross-modal priming and within-modality priming resulted in similar effect sizes, suggesting that activation of a haptic representation produces equal activation of a visual representation and vice versa. Although it is possible that this co-activation could be mediated by semantic or verbal representations of the objects, the fact that babies as young as 2 months of age, as well as chimpanzees [25], can perform cross-modal (visual-to-haptic) matching tasks provides further evidence that visual and haptic object representations are shared between modalities. Finally, there is recent evidence to suggest that cross-modal recognition is viewpoint specific. In other words, an object studied haptically at one particular ‘view’ will be visually recognized better at that view than at other views [16]. Taken together, these studies suggest that visual and haptic representations overlap and that this overlap occurs at the level of three-dimensional shape representations, not at a more abstract level. In other words, the cross-modal representation depends more on the object’s geometry than on its semantic labels and associations.

The behavioral evidence, then, suggests that vision and haptics represent the shape of objects in the same way. It is possible, therefore, that these two sensory systems could also share a common neural substrate for representing the shape of objects. Three studies suggest that the neural substrate underlying visual and haptic object recognition is found within extrastriate cortex. Two of these three studies [1], [3] had subjects perform haptic object identification tasks while measuring brain activation using functional magnetic resonance imaging (fMRI). These investigators found that, compared to a control task, identifying objects haptically produced activation in extrastriate cortex, in addition to other regions. The third study [29] used transcranial magnetic stimulation (TMS), a technique that produces a brief magnetic pulse that is intended to disrupt the processing occurring in a region of cortex. They applied TMS to different regions of cortex while subjects were asked to identify the orientation of a grating that was placed on their finger. When TMS was applied to the occipital cortex contralateral to the hand being used, subjects were not able to perform the task, but performed normally when TMS was applied to the ipsilateral occipital cortex.

The authors of two of the studies described earlier [3], [29] concluded that visual imagery is invoked during haptic object recognition. Deibert et al. [3], who observed fMRI activation in occipital cortex during haptic exploration, argued that visual imagery is a byproduct of haptic processing that is not essential for recognition. Zangaladze et al. [29], however, who used TMS to disrupt occipital cortex, argued that visual imagery is necessary for successful haptic object recognition. The authors of the remaining fMRI study [1], while agreeing that visual imagery was invoked during haptic object recognition, did not believe that the activation in occipital cortex was entirely due to visual imagery. Amedi et al. [1] found that imagining objects resulted in much less activation in occipital cortex than haptically exploring them did. They concluded that some other mechanism besides visual imagery must be involved in activating the occipital cortex during haptic object recognition.

Although Amedi et al. [1] found that the region of occipital cortex that was activated during haptic object recognition did not show much activation for imagined objects, they did find that it responded strongly to the visual presentation of those objects. This region, which lies in ventral occipital cortex, has been found to respond preferentially to object stimuli in many other neuroimaging studies [8], [12], [13], [14], [15], [18], [23]. If, as Amedi et al. [1] suggest, the activation in ventral occipital cortex during haptic object processing is not due primarily to visual imagery, then perhaps it is due to the activation of a common neural substrate for visual and haptic object recognition. As suggested earlier, the idea of a common representation is supported by behavioral experiments that have studied cross-modal transfer between vision and haptics using priming paradigms [4], [5], [19].

In the present study, we used cross-modal priming in combination with high-field fMRI to investigate the neural interactions between vision and haptics. It has been argued that cross-modal priming paradigms are a good tool for investigating the extent to which the representations of two different sensory modalities overlap [19]. We therefore used such a paradigm to investigate whether there is a common neural substrate underlying the visual and haptic representations of object shape. Because we were interested in cross-modal representations of object shape, and consequently wished to minimize the possibility of semantic encoding or verbal labeling mediating any cross-modal priming, we used a set of novel three-dimensional objects made out of clay (Fig. 1). Previous priming studies that have used sets of novel objects have found discrepant results in terms of whether priming produces an increase or a decrease in brain activation [10], [21], [27]. Whatever the direction of the priming effect in these different studies, however, it was consistent within each particular set of novel objects. Thus, our prediction was that the change in activation due to priming would be in the same direction regardless of whether the objects were studied visually or haptically.
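To make the design concrete, the sketch below shows one way the stimulus set could be partitioned across the four conditions described above (visually primed, haptically primed, viewed non-primed, and haptically explored non-primed). This is purely illustrative: the condition names, the equal split of twelve objects per condition, and the per-subject reshuffling are our assumptions, not the counterbalancing procedure actually reported in the study.

```python
import random

# The four conditions described in the text (names are ours)
CONDITIONS = ["visually_primed", "haptically_primed",
              "viewed_nonprimed", "haptically_explored_nonprimed"]

def assign_conditions(object_ids, seed=0):
    """Randomly partition the object set into the four conditions.

    Each object appears in exactly one condition for a given subject;
    reseeding per subject rotates the assignment. Illustrative only;
    not the counterbalancing scheme actually used in the study.
    """
    rng = random.Random(seed)
    ids = list(object_ids)
    rng.shuffle(ids)
    n = len(ids) // len(CONDITIONS)
    return {cond: ids[i * n:(i + 1) * n]
            for i, cond in enumerate(CONDITIONS)}

# 48 novel clay objects -> 12 per condition for, say, subject 3
assignment = assign_conditions(range(1, 49), seed=3)
```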

Subjects

Subjects were six right-handed graduate students attending the University of Western Ontario. All subjects reported normal or corrected-to-normal visual acuity and no known neurological or visual disorders. Ages ranged from 24 to 36 years, with a mean age of 27.2 years. The ethical review boards of both the University of Western Ontario and the Robarts Research Institute approved the experimental protocol.

Stimuli

A set of 48 novel three-dimensional clay objects was selected from a larger, existing

Results

The imaging data were analyzed using the Brain Voyager™ three-dimensional analysis tools. Anatomical MRIs for each subject were transformed into a common brain space [26]. Functional MRIs for each subject were motion corrected on an individual slice basis and smoothed in space and time using a Gaussian filter with a full width at half maximum of two pixels (6 mm) and two volumes (2 s), respectively. The functional images were then aligned to the transformed anatomical MRIs, thereby transforming the
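The spatio-temporal smoothing described above is straightforward to express in code. The following is a minimal sketch, assuming the functional data are already loaded as a 4-D NumPy array ordered (x, y, z, time); the voxel size (3 mm) and volume duration (1 s) implied by the stated kernel widths are used only to interpret those widths, and Brain Voyager's own implementation may of course differ. The key detail is converting each full width at half maximum (FWHM) to a Gaussian sigma via FWHM = 2*sqrt(2*ln 2) * sigma ≈ 2.355 * sigma.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# FWHM = 2*sqrt(2*ln 2) * sigma, so sigma ≈ FWHM / 2.355
FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def smooth_fmri(data, spatial_fwhm_vox=2.0, temporal_fwhm_vols=2.0):
    """Gaussian smoothing of a 4-D (x, y, z, t) fMRI array.

    Defaults follow the text: FWHM of 2 voxels (6 mm at a 3 mm voxel
    size) in space and 2 volumes (2 s at 1 s per volume) in time.
    """
    sigma_s = spatial_fwhm_vox * FWHM_TO_SIGMA
    sigma_t = temporal_fwhm_vols * FWHM_TO_SIGMA
    # One anisotropic Gaussian pass: the same sigma on the three
    # spatial axes and a separate sigma on the time axis.
    return gaussian_filter(data.astype(np.float64),
                           sigma=(sigma_s, sigma_s, sigma_s, sigma_t))

# Toy example: 16 x 16 x 8 voxels, 50 volumes of noise
smoothed = smooth_fmri(np.random.randn(16, 16, 8, 50))
```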

Discussion

Haptic and visual exploration of novel three-dimensional clay objects produced activation in several different brain areas. Some brain regions showed activation during both kinds of exploration whereas others showed activation that was specific to either haptics or vision. Visual-to-visual and haptic-to-visual priming with the same objects increased the level of activation in a subset of these same areas, namely those that were activated by visual stimuli in the exploration phase. To our

Acknowledgements

The research was supported by grants from the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chairs Program, and the Canadian Institutes of Health Research. Thanks to Karin H. James, Stefan Kohler, Susanne Ferber, and two anonymous reviewers for their helpful suggestions on an earlier version of this manuscript. Thanks to Joseph F.X. Desouza for discussions about the labeling of functional brain regions in Figs. 2 and 5.

References (29)

  • K. Grill-Spector et al., A sequence of object-processing stages revealed by fMRI in the human occipital lobe, Human Brain Mapping (1998).
  • N. Hadjikhani et al., Cross-modal transfer of information between the tactile and the visual representations in the human brain: a positron emission tomographic study, The Journal of Neuroscience (1998).
  • E. Halgren et al., Location of human face-selective cortex with respect to retinotopic areas, Human Brain Mapping (1999).
  • K.L. Harman et al., Encoding ‘regular’ and ‘random’ sequences of views of novel three-dimensional objects, Perception (1999).