Abstract
Prevailing theory holds that the medial temporal lobe (MTL) subserves declarative memory exclusively, whereas nondeclarative memory is independent of this brain region. Recent studies in patients with amnesia, however, have shown that performance on declarative memory tasks may not always be dependent on a single MTL memory system, instead highlighting the critical role of anatomically distinct structures in processing different stimulus types. In particular, the hippocampus has been implicated in spatial memory, whereas perirhinal cortex seems critical for object memory. To assess whether stimulus type would also be a key dimension in nondeclarative memory, patients with selective hippocampal lesions were tested on simple categorization and perceptual learning of faces and virtual reality scenes. The patients demonstrated preserved categorization and perceptual learning of faces but abnormal performance when the stimuli to be discriminated were virtual reality scenes. These findings imply that stimulus type may be a more critical predictor of performance on memory tasks (declarative and nondeclarative) than previously thought. They also suggest that reports of good nondeclarative memory after MTL damage may, in some cases, simply reflect the use of stimuli that fail to tap the processes dependent on structures in this region, such as spatial processing in the case of the hippocampus.
- nondeclarative memory
- scene discrimination
- object discrimination
- hippocampus
- amnesia
- medial temporal lobe
Introduction
It is widely accepted that the medial temporal lobe (MTL) supports declarative memory (conscious recollection of facts and events), whereas nondeclarative memory (cumulative improvement in performance without conscious awareness) is dependent on other brain regions preserved in amnesia (Squire and Zola-Morgan, 1991; Squire, 1992; Cohen and Eichenbaum, 1993; Schacter et al., 1993; Gabrieli, 1998; Eichenbaum, 2004; Squire et al., 2004). Two examples of nondeclarative tasks are categorization (in which stimuli are separated into sets with reference to perceived similarities and differences) and perceptual learning [in which, after experience with a set of stimuli, subjects show better discrimination than if the stimuli are completely novel (Hall, 1991; Goldstone, 1998; Saksida, 1999)]. Studies of categorization in amnesic patients have typically reported similar levels of learning to those seen in controls, despite poor subsequent recognition of the studied items (Knowlton and Squire, 1993; Kolodny, 1994; Reed et al., 1999; Filoteo et al., 2001) (but see Zaki et al., 2003; Zaki, 2004). Perceptual learning in amnesia is also thought to be intact (Crovitz et al., 1981; Kapur, 1988; Manns and Squire, 2001; Fahle and Daum, 2002), although this is not always the case (Chun and Phelps, 1999).
Although there seems, therefore, to be considerable evidence for an anatomical distinction between declarative and nondeclarative memory, recent studies challenge this account by revealing deficits after MTL damage that are not easily attributable to declarative memory. Amnesic patients have been shown to demonstrate impairments in spatial scene discrimination after hippocampal damage, with additional deficits in object discrimination if there is also involvement of the perirhinal cortex (Lee et al., 2005a,b,c). These findings imply that MTL structures may not be exclusively involved in declarative memory and point to a specialization of function within the MTL along the lines of space and object perception, consistent with recent findings in nonhuman primates that perirhinal cortex may be critical for the representation of complex conjunctions of object features (Murray and Bussey, 1999; Buckley et al., 2001; Bussey and Saksida, 2002; Bussey et al., 2002, 2003). Although there is less evidence from the nonhuman primate literature for a similar account of hippocampal function (Gaffan, 2001; Buckley et al., 2004; Saksida et al., 2006a,b), the pattern in amnesic cases with selective hippocampal damage raises the possibility that the hippocampus may be critically involved in processing conjunctions of spatial features (Lee et al., 2005b).
An extension of this view is that performance on traditional tests of nondeclarative memory may be similarly stimulus dependent. To test this hypothesis, three amnesic patients with selective bilateral hippocampal damage performed simple tests of categorization and perceptual learning comprising nonspatial (faces) and spatial (virtual reality rooms) stimuli. Contrary to the accepted view, we predicted that performance on both nondeclarative measures would be abnormal in these patients for scenes but not for faces.
Materials and Methods
Participants
Three patients with partial but relatively selective bilateral damage to the hippocampus participated in this study (average ± SD age, 45.0 ± 1.7 years; average ± SD education, 14.3 ± 3.8 years). Two patients had experienced anoxia, one after a prolonged period of status epilepticus and the other attributable to carbon monoxide poisoning. The third was diagnosed with viral encephalitis. Because the extent of the lesion in the three cases was critical to the hypotheses under investigation, magnetic resonance imaging (MRI) scans in the three patients were evaluated using two separate methods: (1) detailed rating of a number of temporal lobe brain regions based on a novel rating scale that focused on MTL regions (Lee et al., 2005c) and (2) use of MRIcro (Rorden and Brett, 2000) to document which brain regions were damaged in all three cases. The results of these evaluations are shown in Table 1 and Figure 1.
Table 1. Structural MRI scan ratings for various brain regions (ordered from anterior to posterior location in the brain) for each individual patient (averaged across hemispheres)
Figure 1. The lesion overlap map for the hippocampal group (n = 3). Purple, one patient showed atrophy in this region; green, two patients showed atrophy in this region; red, all three patients showed atrophy in this region.
The visual rating method was based on a scale that has been validated against volumetric measures and emphasizes medial temporal lobe regions (Galton et al., 2001). A total of nine regions were rated, including the following: (1) anterior hippocampus, which was rated on the anteriormost pontine slice and based on the widths of the choroid fissure and temporal horn and the height of the hippocampal formation; (2) anterior temporal lobe, which was based on the CSF space between the back of the orbit and temporal pole; (3) amygdala, which was rated on the scan slice anterior to the tip of the temporal horn; (4) lateral temporal lobe, which was rated on the same slice as the anterior hippocampus and was based on the cortical thickness of the superior and middle temporal gyri; (5) posterior hippocampus, which was rated on the anteriormost slice through the cerebral aqueduct in parallel with the anterior measure and according to the width of the temporal horn and the height of the hippocampal formation; and finally (6) anterior parahippocampal gyrus, (7) medial bank of the collateral sulcus, (8) lateral bank of the collateral sulcus, and (9) occipitotemporal sulcus, which were all rated on the slice showing the collateral sulcus at its longest (Lee et al., 2005c). Other than the anterior hippocampus, which was rated on a five-point scale (normal, 0; severe atrophy, 4) based on Scheltens et al. (1992), all regions were assessed using a four-point scale (normal, 0; severe atrophy, 3), with ratings for each area averaged across both hemispheres.
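For concreteness, this scoring scheme amounts to a per-hemisphere rating for each region (0–4 for the anterior hippocampus, 0–3 elsewhere) that is then averaged across left and right. A minimal sketch of that bookkeeping follows; the region labels and example ratings are illustrative only, not the patients' actual scores.

```python
# Sketch of the visual rating bookkeeping: each region is rated per hemisphere
# (anterior hippocampus on a 0-4 scale, all other regions on a 0-3 scale) and
# the two hemisphere ratings are averaged. Example ratings are illustrative only.
SCALE_MAX = {
    "anterior hippocampus": 4,           # five-point scale after Scheltens et al. (1992)
    "anterior temporal lobe": 3,
    "amygdala": 3,
    "lateral temporal lobe": 3,
    "posterior hippocampus": 3,
    "anterior parahippocampal gyrus": 3,
    "medial collateral sulcus": 3,
    "lateral collateral sulcus": 3,
    "occipitotemporal sulcus": 3,
}

def averaged_rating(region: str, left: int, right: int) -> float:
    """Average left/right ratings after checking they fit the region's scale."""
    limit = SCALE_MAX[region]
    assert 0 <= left <= limit and 0 <= right <= limit, "rating outside scale"
    return (left + right) / 2

print(averaged_rating("anterior hippocampus", left=3, right=4))  # -> 3.5
```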
A repeated-measures ANOVA was used to contrast the rating data for the three hippocampal patients with a group of 12 healthy control subjects (for details, see Lee et al., 2005c). This revealed a main effect of region (F(8,104) = 5.6; p < 0.001), which interacted with group (F(8,104) = 3.8; p < 0.005). Post hoc independent samples t tests were conducted to investigate this interaction further, and these showed that only the anterior hippocampus was significantly atrophied in the patient group (p < 0.009, corrected for multiple comparisons).
To analyze which areas were damaged in all three individual patients, the structural scans were first warped into Montreal Neurological Institute (MNI) space in SPM99 (Wellcome Department of Functional Neuroscience, London, UK) using a standard procedure for brain images with focal lesions (Brett et al., 2001). This involved creating a mask in MRIcro (Rorden and Brett, 2000) for each subject’s lesion by delineating regions of CSF in the middle cranial fossae, including the inferior horn and choroid fissure, extending to a posterior limit at the end of the hippocampus. These masks were then used for cost function masked normalization of each brain to a standard T1 MNI template. After warping, the lesions of each patient were redrawn and finally overlaid onto a single-subject T1 MNI template using MRIcro (Fig. 1). This process confirmed the results from the rating scale: all three patients had bilateral hippocampal damage, with some mild atrophy of the surrounding parahippocampal gyrus in two cases.
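The overlap map in Figure 1 is, in essence, a voxelwise count of how many normalized lesion masks include each voxel. A minimal sketch of that final step is shown below, assuming nibabel and numpy and hypothetical file names; the study itself used MRIcro and SPM99 for masking and normalization.

```python
# Sketch: voxelwise lesion overlap from spatially normalized binary lesion masks.
# Assumes each mask is a NIfTI volume already warped to MNI space (1 = lesion).
# File names are hypothetical; the original work used MRIcro and SPM99.
import numpy as np
import nibabel as nib

mask_files = [
    "patient1_lesion_mni.nii",
    "patient2_lesion_mni.nii",
    "patient3_lesion_mni.nii",
]

overlap = None
affine = None
for path in mask_files:
    img = nib.load(path)
    lesion = (img.get_fdata() > 0).astype(np.uint8)   # binarize the mask
    overlap = lesion if overlap is None else overlap + lesion
    affine = img.affine

# Each voxel now codes how many patients were lesioned there:
# 1 = one patient (purple in Fig. 1), 2 = two patients (green), 3 = all three (red).
nib.save(nib.Nifti1Image(overlap, affine), "lesion_overlap_n3.nii")
```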
Neuropsychological testing revealed clear amnesia in the three cases with poor performance on immediate (mean ± SD, 18.5 ± 8.6%) and delayed (mean ± SD, 8.0 ± 11.3%) recall of a story (Wechsler Memory Scale-III) and chance-level delayed recognition of the same story (mean ± SD, 56.7 ± 2.9%). Recall of the Rey Complex Figure was also impaired (immediate, mean ± SD, 37.4 ± 16.9%; delayed, mean ± SD, 25.5 ± 21.8%). In contrast, recognition memory for faces was good (mean ± SD, 93.3 ± 4.6%). Visuoperceptual performance appeared normal, as measured by the Visual Object and Space Perception battery (all subtests passed), Benton Face Recognition Test (mean ± SD, 87.7 ± 3.0%), and copy of the Rey Complex Figure (mean ± SD, 99.1 ± 1.6%). There was no evidence of semantic impairment, with all patients performing normally on word–picture matching (100%), naming (mean ± SD, 98.4 ± 1.6%), and the pictures subtest from the Pyramids and Palm Trees Test (mean ± SD, 98.7 ± 1.1%).
Two patients were also given a commonly used dot prototype learning paradigm (Knowlton and Squire, 1993) in which participants were trained on a set of 40 high-distortion exemplars of a single dot pattern. Subsequently, patients were tested on their ability to classify a series of 84 patterns (4 prototype, 20 low-distortion, 20 high-distortion, and 40 random patterns) on the basis of membership in the trained category. Consistent with the published literature on this task (Knowlton and Squire, 1993; Kolodny, 1994; Squire and Knowlton, 1995; Zaki et al., 2003), our two patients showed intact category learning (mean ± SD; controls, prototype, 88 ± 11%; low distortion, 80 ± 13%; high distortion, 65 ± 11%; and random, 23 ± 8%; patients, prototype, 88 ± 18%; low distortion, 68 ± 4%; high distortion, 48 ± 25%; and random, 44 ± 12%; all values percentage endorsed). Analyses of the patients’ data contrasted with 12 age- and education-matched controls (average ± SD age, 50.9 ± 3.7 years; average ± SD education, 14.2 ± 2.6; no significant differences in age or education between the patient and control groups, both t < 1.6; p > 0.7) revealed a main effect of stimulus type (F(3,36) = 19.6; p < 0.001) and no effect of group (F(1,12) = 0.17; p = 0.69). The interaction was also nonsignificant (F(3,36) = 2.69; p = 0.061); although there was a trend toward a difference, this was driven by greater endorsement of random items by patients compared with controls (p = 0.04, corrected for multiple comparisons). The numerical performance of the amnesic patients on this condition was, however, equivalent to that seen in the original paper by Knowlton and Squire (1993).
Twelve neurologically healthy, age-matched control participants were recruited from the Medical Research Council Cognition and Brain Sciences Volunteer Panel to undertake the experimental tasks (average ± SD age, 49.8 ± 6.0; average ± SD education, 14.4 ± 2.2). There were no significant differences in terms of age and education between the patient and control groups (both t < 1.0; p > 0.3). All participants gave informed consent for inclusion in the study, and ethical approval was given by the Cambridge Health Authority Local Research Ethics Committee (United Kingdom).
Stimuli
Two experimental stimulus types (faces and virtual reality scenes) were used in the experiment (Fig. 2). A third stimulus type (abstract art) was used as an irrelevant filler in the categorization task as described below. Because the aim of the study was to investigate facilitation of discrimination learning between morphed pairs of images, with initial categorization (i.e., participants identified which of two prototype stimuli was most similar to a morphed exemplar) followed by discrimination (i.e., participants responded whether a pair of stimuli was the same or different), a continuum of stimuli was constructed for each type. Two perceptually similar images were chosen for both the faces and scenes, and a continuum of 30 stimuli was created for each condition by blending the two prototype images, according to user-specified ratios, using commercially available morphing software (Morpheus Photo Animator; ACD Systems, Saanichton, British Columbia, Canada). Within the continuum, 15 pairs of images were thus created with the aim of ensuring that pictures within each stimulus pair could not be discriminated on the basis of one conspicuous feature at any level of overlap. Morph levels for each stimulus set were determined by piloting of the discrimination task in controls so that, for each stimulus set, it was equally difficult to discriminate between the pairs. As such, absolute levels of morph were different for each stimulus set. The 15 pairs were then divided equally into three levels of increasing difficulty, with level 1 containing the least feature overlap and level 3 the most. It was hoped that this variability in difficulty would provide a means of demonstrating that the more difficult the discrimination, the greater the perceptual learning (McLaren and Mackintosh, 2000). In fact, although there was a main effect of level on categorization (both for percentage correct and reaction time), difficulty did not significantly influence perceptual learning regardless of outcome measure. There were also no significant two-way (level × group; level × stimuli) or three-way (level × stimuli × group) interactions in categorization or perceptual learning, the latter revealing that difficulty of discrimination did not differentially influence performance on either task for the patients compared with the controls. Consequently, the outcome measures included in the statistical analyses reported below combined scores across difficulty level.
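The morphed continua were created with Morpheus Photo Animator; purely to illustrate the idea of blending two prototypes at user-specified ratios, the sketch below builds a 30-step continuum by alpha blending with Pillow. The file names, the linear blend, and the adjacent-image pairing are assumptions for illustration (dedicated morphing software also warps feature geometry).

```python
# Illustrative sketch only: a 30-step continuum between two prototype images
# built by alpha blending with Pillow. The study used dedicated morphing
# software (Morpheus Photo Animator), which also warps feature geometry.
# File names and the adjacent-image pairing are hypothetical.
from PIL import Image

proto_a = Image.open("prototype_a.png").convert("RGB")
proto_b = Image.open("prototype_b.png").convert("RGB")   # assumed same size as proto_a

n_steps = 30
continuum = []
for i in range(n_steps):
    ratio = i / (n_steps - 1)             # 0.0 = pure prototype A, 1.0 = pure prototype B
    morph = Image.blend(proto_a, proto_b, ratio)
    morph.save(f"morph_{i:02d}.png")
    continuum.append(morph)

# For illustration, group the continuum into 15 pairs of neighboring images;
# in the study, pairings and morph separations were set by piloting so that
# the three difficulty levels were matched across stimulus sets.
pairs = [(continuum[2 * i], continuum[2 * i + 1]) for i in range(15)]
```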
Figure 2. An example trial from the faces (a) and the scenes (b) categorization conditions. For each condition, the two category reference images were displayed at the top, and the bottom image was the example to be categorized.
Procedure
All testing was conducted using a laptop computer running Windows 98 (Microsoft, Redmond, WA). The program was written in Visual Basic (Microsoft), and subjects indicated their responses by pressing designated keys on the laptop keyboard. Subjects were provided with detailed instructions and undertook a practice task before testing. For each experimental stimulus type, the task involved two components, a categorization phase and then a discrimination phase.
Categorization.
Three images were displayed on the screen: the two reference images that represented the extremes of the morphed continuum (i.e., the prototype stimuli), plus an example image from the morphed continuum (Fig. 2, a, faces, b, scenes). The participants were required to categorize the morphed stimulus by pressing one of two designated keys on the laptop keyboard. There were 90 trials, with three repetitions of each of the 30 stimuli. The categorization phase also doubled as an exposure task for the subsequent discrimination phase. Consequently, there were two types of categorization: a relevant task, in which the stimuli being categorized would later serve as the now-familiar stimuli in the discrimination phase, and an irrelevant filler task, always presented first as an introductory task, in which the stimuli to be categorized were abstract art images not otherwise used in the experiment. No feedback was provided about whether the image selected was correct, and, although no time limit was imposed, all participants were asked to respond quickly and accurately.
Discrimination.
The discrimination phase was presented after each categorization phase, after a 30 s break. There were 60 same–different trials in which two pictures from the continuum described above were displayed on the screen. The participant was required to indicate on the keyboard whether they thought the images were identical or whether there was a difference between them (Fig. 3, a, faces, b, scenes). On 30 trials, the correct response was “same”: each of the 30 stimuli was presented in duplicate on the screen. On the other 30 trials, the correct response was “different”: the 15 pairs of stimuli (see above, Stimuli) were presented side by side on the screen, twice each.
Figure 3. An example trial from the faces (a) and the scenes (b) conditions of the same–different discrimination task.
In all conditions for both tasks, the order of presentation was pseudorandomized and fully counterbalanced. There was no feedback provided about whether the image selected was correct, and, although no time limit was imposed, all participants were encouraged to respond as quickly and accurately as possible.
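The experiment itself was programmed in Visual Basic; as a rough, hypothetical sketch of the trial structure just described (90 categorization trials from three repetitions of 30 stimuli, and 60 same–different discrimination trials built from duplicates and the 15 morphed pairs), the trial lists could be constructed as follows. Stimulus identifiers and the adjacent pairing are illustrative only.

```python
# Hypothetical sketch of the trial lists described above; the actual experiment
# was programmed in Visual Basic and used its own stimulus files and pairings.
import random

stimuli = [f"morph_{i:02d}" for i in range(30)]                     # 30 continuum images
pairs = [(stimuli[2 * i], stimuli[2 * i + 1]) for i in range(15)]   # 15 pairs (illustrative pairing)

# Categorization: 90 trials = three repetitions of each of the 30 stimuli,
# presented in pseudorandom order with no feedback.
categorization_trials = stimuli * 3
random.shuffle(categorization_trials)

# Discrimination: 30 "same" trials (each stimulus shown in duplicate) plus
# 30 "different" trials (each of the 15 pairs shown twice), shuffled together.
same_trials = [(s, s, "same") for s in stimuli]
different_trials = [(a, b, "different") for a, b in pairs for _ in range(2)]
discrimination_trials = same_trials + different_trials
random.shuffle(discrimination_trials)
```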
The discrimination phase provided a means of investigating whether perceptual learning had occurred as a result of exposure to the stimuli during the relevant categorization task. Because the perceptual learning experiment was conducted within subject, with participants receiving both novel and familiar discriminations (i.e., preceded by irrelevant and relevant categorization, respectively), it was necessary to have two sets of stimuli for both the faces and virtual reality scenes. As a result, half of the control participants were presented with set 1 (faces and scenes) for categorization and discrimination, whereas set 2 (faces and scenes) was only seen at discrimination, after the irrelevant abstract art categorization phase. The opposite procedure was used with the remaining control participants. Order of administration of faces and scenes was counterbalanced across all participants. Whereas one patient was exposed to set 1 during categorization and discrimination, the other two cases were trained on set 2. Two patients received faces before scenes, whereas the third was administered scenes before faces.
To avoid unwanted effects of generalization of perceptual learning between the two same-type stimulus sets, the novel discrimination was always administered before the familiar discrimination. Any generalization of perceptual learning would therefore enhance the perceptual learning effect in all participants rather than reducing it for some (a pattern evident when a counterbalanced procedure was piloted in control participants). The order of presentation of the two tasks was, therefore, categorization (irrelevant abstract art) → novel discrimination → categorization (relevant stimuli) → familiar discrimination for each stimulus type.
Measures of learning
The two tasks provided two different measures of nondeclarative learning. The first, based solely on performance on the categorization task, involved contrasting the patients’ performance on faces and scenes with that of control participants. Given the findings by Lee et al. (2005c) on a similar type of task, we predicted that the hippocampal cases would show impaired categorization for scenes but not for faces. The second was obtained by measuring the improvement in performance on the discrimination tasks when the stimuli were familiar (i.e., had been seen previously in the categorization phase) compared with when the stimuli were novel (i.e., the preceding categorization had used the irrelevant abstract art stimuli). This is a standard procedure for assessing perceptual learning (Gibson and Walk, 1956; Honey and Hall, 1989; Mackintosh et al., 1991; Aitken et al., 1996), and we predicted that, although there would be evidence for perceptual learning for both types of stimuli in control participants, the patients would only show perceptual learning for faces, the processing of which we presumed to be dependent on perirhinal cortex rather than the hippocampus. Performance was measured with reference to accuracy (percentage correct) and raw reaction time (milliseconds).
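As a minimal sketch of this perceptual learning index, the two outcome measures can be expressed per participant as familiar-minus-novel difference scores, where a positive accuracy difference and a negative reaction time difference both indicate learning. The column names and example values below are assumptions for illustration.

```python
# Sketch: familiar-minus-novel difference scores per subject and stimulus type.
# Column names and values are hypothetical; positive accuracy differences and
# negative reaction time differences both indicate perceptual learning.
import pandas as pd

# Long-format data: one row per subject x stimulus type x familiarity condition.
df = pd.DataFrame({
    "subject":     ["c01"] * 4,
    "stimulus":    ["faces", "faces", "scenes", "scenes"],
    "familiarity": ["novel", "familiar", "novel", "familiar"],
    "accuracy":    [70.0, 80.0, 68.0, 79.0],          # percentage correct (made-up)
    "rt":          [1850.0, 1600.0, 2100.0, 1900.0],  # milliseconds (made-up)
})

wide = df.pivot_table(index=["subject", "stimulus"],
                      columns="familiarity", values=["accuracy", "rt"])
learning = pd.DataFrame({
    "accuracy_gain": wide[("accuracy", "familiar")] - wide[("accuracy", "novel")],
    "rt_change":     wide[("rt", "familiar")] - wide[("rt", "novel")],
})
print(learning)
```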
Statistical analysis
For all subjects, the first trial of each condition was disregarded for analysis. Repeated-measures ANOVAs were conducted on both accuracy scores (as measured by percentage correct) and reaction time. Significant interactions were investigated further using univariate ANOVAs, and a significance threshold of p < 0.05 was adopted throughout.
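For reference, the mixed design used throughout the Results (stimulus as a within-subject factor, group as a between-subject factor) could be run along the following lines, assuming a long-format table and the pingouin library; the column names and data values are hypothetical, and the original analyses were not necessarily performed in Python.

```python
# Sketch of a mixed-design ANOVA (stimulus within subject, group between subjects),
# assuming the pingouin library. Subjects, column names, and accuracy values are
# hypothetical; the original analyses were not necessarily run in Python.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject":  ["c01", "c01", "c02", "c02", "c03", "c03",
                 "p01", "p01", "p02", "p02", "p03", "p03"],
    "group":    ["control"] * 6 + ["hippocampal"] * 6,
    "stimulus": ["faces", "scenes"] * 6,
    "accuracy": [92.0, 90.0, 89.0, 87.0, 91.0, 88.0,
                 90.0, 66.0, 88.0, 69.0, 89.0, 67.0],   # made-up percentages
})

# Main effects of stimulus and group plus their interaction.
aov = pg.mixed_anova(data=df, dv="accuracy", within="stimulus",
                     subject="subject", between="group")
print(aov)

# Significant interactions were followed up with univariate ANOVAs per stimulus type.
follow_up = pg.anova(data=df[df["stimulus"] == "scenes"], dv="accuracy", between="group")
print(follow_up)
```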
Results
Categorization
To ensure that the counterbalanced variable of set (see Materials and Methods) did not have a significant effect on accuracy or reaction time and to determine whether there was any interaction of set with stimulus type, a repeated-measures ANOVA was performed on the control data only. The ANOVA included a within-subject variable of stimulus (faces vs scenes) and a between-subject variable of set (1 vs 2). Critically, there were no significant main effects of either stimulus type or set, nor was there any interaction. Accordingly, data from the two sets were combined for subsequent analysis.
Figure 4 (a, accuracy; b, reaction time) shows the data from both participant groups split according to stimulus type. Although performance on the faces categorization task in the two groups was relatively similar, for both accuracy and reaction time, the patients correctly categorized fewer scenes than controls (67 vs 88%) and took considerably longer (>1 s) when doing so.
Figure 4. Scores on the faces and scenes categorization tasks for the controls (white bars) and hippocampal patients (gray bars). a, Percentage correct; b, reaction time (milliseconds). ∗p < 0.05, significant difference between the two groups.
A repeated-measures ANOVA on the accuracy data, with a within-subject variable of stimulus and a between-subject variable of group (control vs hippocampal), confirmed these initial conclusions. Significant main effects of stimulus (F(1,13) = 6.3; p < 0.03) and group (F(1,13) = 9.8; p < 0.01) were evident, as well as a significant interaction between the two (F(1,13) = 7.9; p < 0.02). Additional univariate ANOVAs revealed that there was a significant group effect, with the hippocampal patients categorizing less accurately than controls, for the virtual reality scenes (F(1,13) = 18.8; p < 0.001) but not for faces.
A similar ANOVA on the reaction time data revealed only an interaction between stimulus and group (F(1,13) = 11.8; p < 0.004). Univariate ANOVAs confirmed that the hippocampal patients showed a trend toward slower categorization of the virtual reality scenes, as predicted (F(1,13) = 4.2; p < 0.06), but no such effect for faces.
Discrimination
To ensure that preexposure to the faces and virtual reality scenes actually resulted in perceptual learning in healthy controls, both the accuracy and reaction time data were analyzed in the controls only. Repeated-measures ANOVAs with within-subject variables of stimulus (faces vs scenes) and familiarity (familiar vs novel) and a between-subject variable of set (1 vs 2) revealed a main effect of familiarity only (accuracy, F(1,11) = 10.7, p < 0.007; reaction time, F(1,11) = 6.0, p < 0.04). Categorization of both faces and scenes before undertaking the discrimination task with these stimuli, therefore, resulted in improved accuracy and faster reaction times in control participants for both types of complex stimuli. No other main effects or interactions reached significance. Consequently, as for the categorization task, data from both sets were combined for each stimulus type in additional analyses.
Because perceptual learning was evident for both stimulus types in healthy controls, difference scores (calculated for each subject for both scenes and faces by subtracting the score for novel stimuli from that obtained for familiar items) were used to contrast the level of perceptual learning attained by the hippocampal patients with that of the controls. These difference scores are displayed in Figure 5 (a, accuracy; b, reaction times), in which a positive score for accuracy and a negative score for reaction time are evidence of perceptual learning. What is immediately obvious from these panels is that, although the hippocampal group showed some improvement in accuracy for both faces and scenes, different profiles were evident on the reaction time measure, with a speeding of responses for faces but a striking slowing for scenes.
Figure 5. Difference scores (novel score subtracted from familiar score) on the faces and scenes same–different discrimination task for controls (white bars) and hippocampal patients (gray bars). a, Percentage correct; b, reaction time (milliseconds); c, inverse efficiency measure. ∗p < 0.05, significant difference between the two groups.
To investigate these findings, repeated-measures ANOVAs were conducted on the difference score data, with a within-subject variable of stimulus and a between-subjects variable of group. Although the accuracy ANOVA revealed no significant main effects or interaction, a significant interaction between stimulus and group was found for reaction time (F(1,13) = 7.0; p < 0.02). Univariate ANOVAs confirmed that the hippocampal group performed significantly differently from controls in the scenes condition (F(1,13) = 10.5; p < 0.006) but not in the faces condition.
These analyses suggest that perceptual learning of scenes is not normal in individuals with hippocampal damage: although the patients accurately discriminated more familiar than novel scenes, they showed a striking increase in reaction time after previous experience with the spatial stimuli. This is the reverse of the improved speed of responding typically seen in perceptual learning and demonstrated by our healthy controls for both faces and scenes; in other words, the patients appear to be demonstrating a “speed–accuracy trade-off.” To compensate for this, the data can be analyzed using an inverse efficiency measure, which combines reaction time and accuracy (Townsend and Ashby, 1978, 1983). Inverse efficiency, defined as mean reaction time divided by the proportion of trials correct, was therefore calculated separately for the novel and familiar stimulus conditions for each subject. These data are displayed as difference scores in Figure 5c, in which more negative scores are evidence of greater perceptual learning.
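A minimal sketch of this calculation, assuming per-condition mean reaction times and accuracies, is shown below; the example values are illustrative only.

```python
# Sketch: inverse efficiency = mean reaction time / proportion correct, computed
# per condition and then expressed as a familiar-minus-novel difference score.
# Example values are illustrative only.

def inverse_efficiency(mean_rt_ms: float, proportion_correct: float) -> float:
    """Townsend and Ashby's inverse efficiency score (lower = more efficient)."""
    return mean_rt_ms / proportion_correct

# One hypothetical subject, scenes condition:
ie_novel = inverse_efficiency(mean_rt_ms=2100.0, proportion_correct=0.70)
ie_familiar = inverse_efficiency(mean_rt_ms=1900.0, proportion_correct=0.80)

# More negative differences indicate greater perceptual learning.
ie_change = ie_familiar - ie_novel
print(round(ie_change, 1))   # -625.0 for these example values
```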
Figure 5c confirms that the patients and controls show a different pattern of performance on scenes but not on faces. Although both groups show a negative inverse efficiency change for faces (slightly numerically greater in the case of the hippocampal group), the patients showed a large and positive inverse efficiency change for the virtual reality scenes compared with a negative value in the control population. Statistical analyses confirmed a significant main effect of group (F(1,13) = 5.0; p < 0.05), which interacted with stimulus type (F(1,13) = 5.0; p < 0.05). Univariate ANOVAs revealed that the interaction could be accounted for by a difference between the groups for scenes (F(1,13) = 9.0; p < 0.01) but not for faces.
Discussion
Contrary to the predominant view of human memory, which posits that MTL regions are critical for declarative but not nondeclarative memory (Eichenbaum, 2001, 2004; Squire et al., 2004), patients with selective, bilateral lesions to the hippocampus showed impaired performance on visual categorization and perceptual learning. Critically, performance was not indiscriminately impaired: selective deficits were evident only for virtual reality scenes, with normal categorization and perceptual learning for similarly complex face stimuli.
A clear division in performance between stimuli that placed a demand on processing spatial complexity versus object features was predicted based on recent investigations (Lee et al., 2005b,c, 2006). For example, Lee et al. (2005c) asked participants to match one of two stimuli from different points along a morphed continuum to an unchanging reference image taken from one end of the continuum. Patients with hippocampal damage were impaired when the stimuli were photographs of landscapes but not when faces or objects were presented. In contrast, patients with more extensive damage to the MTL, including the hippocampus and perirhinal cortex, performed poorly on both types of stimuli.
The categorization results reported here replicate these findings, extending the deficits seen after hippocampal damage to morphed images of virtual reality scenes. This study also builds on an additional investigation in which the same patients with hippocampal damage were impaired at selecting the odd virtual reality room presented alongside three different views of another virtual reality room (Lee et al., 2005b). In this oddity task, there was no repetition of stimuli or trials, and, consequently, the deficits in the patients cannot be attributed easily to declarative memory. Consistent with the current results, the patients showed normal oddity judgment for faces. Overall, therefore, patients with hippocampal damage demonstrate clear impairments on tasks that require them to discriminate, learn, and remember spatially complex stimuli but not faces or objects.
Studies in nonhuman primates provide an explanation for these findings (Gaffan, 2001). Such investigations have highlighted a critical role for perirhinal cortex in the representation and processing of complex object stimuli (Murray and Bussey, 1999; Buckley et al., 2001; Bussey and Saksida, 2002; Bussey et al., 2003). Monkeys and humans with perirhinal cortex lesions are impaired on concurrent object discriminations with a high, but not low, degree of “feature ambiguity,” a property of visual discriminations that emerges when complex objects have a large number of overlapping visual features (Bussey et al., 2002; Barense et al., 2005). These experiments suggest that rostral inferotemporal cortical regions, including the perirhinal cortex, contain representations of complex conjunctions of object features, whereas more caudal regions (e.g., visual areas V4 and TEO) house the components from which these conjunctions are formed. An extension of this view is that the hippocampus may resolve “spatial feature ambiguity” via the formation of representations with conjunctions of spatial features (Buckley et al., 2004; Lee et al., 2005a), although this hypothesis remains to be tested.
Consistent with this theoretical account is the abnormal perceptual learning for scenes seen in our hippocampal group. Whereas healthy participants showed improved accuracy and a speeding up of performance when discriminating the familiar stimuli, the hippocampal patients’ accuracy improved but, notably, their reaction times increased. One mechanism that has been proposed as contributing to perceptual learning is unitization (McLaren et al., 1989; Goldstone, 1998; McLaren and Mackintosh, 2000), the process by which features of a stimulus are integrated such that feature conjunctions are better represented, thus improving accuracy and speeding responses. In the hippocampal group, increased familiarity with the scenes may have resulted in better representation of the individual features of the scenes stored in more caudal temporal lobe regions, thereby improving accuracy, but at the cost of a more labored, time-consuming, serial analysis of the features of the stimulus. This hypothesis could be tested more systematically using eye-tracking techniques, which provide an implicit measure of memory (Althoff and Cohen, 1999). In the study by Ryan et al. (2000), healthy controls showed an increase in fixations within a region of a repeated scene that had undergone manipulation since its last viewing (e.g., the deletion or addition of an object). Amnesic patients did not show this effect, suggesting poor memory for the relations among the constituent elements of scenes. Comparison of familiar and novel scenes using this procedure might provide a means to test the unitization hypothesis.
Although a distinction between object and spatial processing provides a parsimonious explanation for the data reported here (Barense et al., 2005; Lee et al., 2005a,b), it is not clear how this theory can account for the full range of published findings on categorization and perceptual learning. Our proposal that there is no clear distinction between declarative and nondeclarative memory in amnesia echoes proponents of single-system models of memory, in which good dot categorization in the context of impoverished recognition memory (Knowlton and Squire, 1993; Reed et al., 1999) is explained by differences in memory demands across tasks rather than by differences in memory systems (Nosofsky and Zaki, 1998; Kinder and Shanks, 2001, 2003; Zaki and Nosofsky, 2001; Zaki et al., 2003). Notably, however, these psychological models do not predict stimulus-specific effects; instead, because the main deficit in amnesia is an inability to discriminate among items in memory, they hypothesize impairments when memory demands are high. For example, Zaki et al. (2003) reported normal dot categorization when a single category was tested but impairment on a memory-intensive two-category task. To explain our data under the single-system account, greater task demands for processing of scenes over faces would need to be proposed, yet there was no evidence of differences in performance difficulty across these two conditions in control participants, nor was there any evidence of greater impairment in patients when face discriminations were made more difficult [see analysis in Materials and Methods and the study by Lee et al. (2005c)]. That said, the deficits reported by Zaki et al. (2003) in the two-category dot categorization task (but see Kolodny, 1994) are challenging for our stimulus-oriented account, which predicts preserved categorization and perceptual learning for simple stimuli even when the task is demanding. As acknowledged by the authors, however, a limitation of this study was the possibility that the patients had more widespread damage beyond the MTL, a profile of atrophy that our account predicts would yield deficits on demanding dot categorization tasks.
Setting aside the issue of how to explain our results in terms of purely psychological single-system models, a few other studies deserve mention. In particular, Hopkins et al. (2004b) recently reported normal procedural spatial sequence learning using a computerized radial-arm maze in hypoxic individuals with hippocampal damage. In contrast, the patients performed poorly on a similar declarative task in which sequences had to be explicitly remembered. Although there are some outstanding issues about this study (such as the measures used to compare performance on the two tasks), the finding, when considered in the context of our patients’ inability to discriminate spatial scenes, raises the question of what aspects of spatial processing are dependent on the hippocampus. We proposed that the hippocampus is necessary for representing the complex conjunctions that constitute a spatial scene (Lee et al., 2005c), but additional work is necessary to determine under what circumstances the hippocampus is recruited during spatial tasks. Such studies would also be useful in understanding whether spatial impairments can explain deficits on probabilistic category learning paradigms (Hopkins et al., 2004a) and tests of artificial grammar learning (Zaki, 2004), neither of which has an overt spatial demand. Additional limitations of our study are the small sample size and the possibility that the deficits reported here are not attributable to hippocampal damage (two patients had mild involvement of the parahippocampal gyrus). Arguing against this possibility, the case with a selective hippocampal lesion also showed poor categorization and perceptual learning, and the area of damage common to all three cases was the hippocampus (Fig. 1).
In summary, the data reported here provide additional evidence of a role for the hippocampus in aspects of spatial processing and memory (O’Keefe, 1976; O’Keefe and Nadel, 1978; Gilbert et al., 1998; Spiers et al., 2001; Astur et al., 2002; Burgess and O’Keefe, 2003; McGregor et al., 2004; Pearce et al., 2004) but extend these investigations to reveal clear influences of stimulus type on nondeclarative tasks not thought to be dependent on the hippocampus (Turk-Browne et al., 2006). Additional investigations of aspects of nondeclarative memory in amnesia are clearly necessary, both to clarify the nature of the impairment in the patients and to identify whether a similar pattern would be evident on other measures of nondeclarative memory. The current data are, however, inconsistent with any model of memory that regards declarative memory as the sole domain of MTL structures (Eichenbaum, 2001, 2004; Squire et al., 2004) and highlight the need to consider stimulus type as a critical influencing factor in the performance of amnesic patients on tests of basic perceptual discrimination as well as memory.
Footnotes
- This work was supported by the Alzheimer’s Research Trust and the Medical Research Council. We are grateful to the patients and control volunteers for their participation in this study.
- Correspondence should be addressed to Dr. Kim S. Graham, Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK. Email: kim.graham@mrc-cbu.cam.ac.uk