Abstract
Humans can consciously project themselves into the future and imagine themselves at different places. Do mental time travel and mental space navigation abilities share common cognitive and neural mechanisms? To test this, we recorded fMRI while participants mentally projected themselves in time or in space (e.g., 9 years ago, in Paris) and ordered historical events from their mental perspective. Behavioral patterns were comparable for mental time and space and were shaped by self-projection and by the distance of historical events to the mental position of the self, suggesting the existence of egocentric mapping in both dimensions. Nonetheless, self-projection in space engaged the medial and lateral parietal cortices, whereas self-projection in time engaged a widespread parietofrontal network. Moreover, while a large distributed network was found for spatial distances, temporal distances specifically engaged the right inferior parietal cortex and the anterior insula. Across these networks, a robust overlap was only found in a small region of the inferior parietal lobe, adding evidence for its role in domain-general egocentric mapping. Our findings suggest that mental travel in time or space capitalizes on egocentric remapping and on distance computation, which are implemented in distinct dimension-specific cortical networks converging in the inferior parietal lobe.
SIGNIFICANCE STATEMENT As humans, we can consciously imagine ourselves at a different time (mental time travel) or at a different place (mental space navigation). Are such abilities domain-general, or are the temporal and spatial dimensions of our conscious experience separable? Here, we tested the hypothesis that mental time travel and mental space navigation require the egocentric remapping of events, including the estimation of their distances to the self. We report that, although both remapping and distance computation are foundational for the processing of the temporal and spatial dimensions of our conscious experience, their neuroanatomical implementations are clearly dissociable, engaging distinct parietal and parietofrontal networks for mental space navigation and mental time travel, respectively.
Introduction
The link between the two fundamental dimensions of our reality—space and time—is foundational for cognitive neuroscience (Gallistel, 1990; Dehaene and Brannon, 2010; Buzsáki and Moser, 2013). The representation of time has been hypothesized to be integrated in a common mental magnitude (Walsh, 2003), construed from spatial primitives (Boroditsky, 2000; Lakoff and Johnson, 2005), and spatialized (Arzy et al., 2009a; Bonato et al., 2012) so that temporal ordinality (before/after) may map onto spatial polarity (left/right). To date, however, the relationship between the mental representations of space and time remains more descriptive than explanatory. Here, we asked whether the temporal and spatial dimensions of mental events undergo such domain-general processing.
The ability to imagine oneself in the past or in the future is called mental time travel (Tulving, 2002; Suddendorf et al., 2009) and has been characterized by chronometric effects. During mental time travel, participants are typically slower and less accurate when ranking an event as happening before or after an imagined self-position in time than when ranking it relative to the present (Arzy et al., 2009b). This effect has been hypothesized to result from the imagery of the self at different times and has been referred to as "self-projection in time" (Arzy et al., 2009b). Regardless of the mental self-position, participants are also typically faster and more accurate in ranking events that are further away from the mental self-position (Arzy et al., 2009a), a pattern consistent with canonical symbolic distance effects (Moyer and Bayer, 1976), suggesting that the distances between events are automatically computed. Under the hypothesis of a common cognitive system for the representation of time and space, we predicted comparable chronometric patterns in a mental space navigation task designed to be cognitively analogous to the mental time travel task used here. Mental time travel and mental space navigation were also predicted to share neural activation patterns on the basis of recent findings studying distance effects (Parkinson et al., 2014; Peer et al., 2015).
To test these hypotheses, 17 participants took part in an event-related fMRI study during which they judged the ordinality of remembered events with respect to an imagined self-position in time or in space or, in control trials, estimated their emotional valence. In a given experimental block, a series of eight temporal, spatial, or emotional judgments was preceded by a self-positioning cue, which instructed participants to mentally position themselves in time or in space (Fig. 1A). Participants were then prompted with one of three possible response pairs indicating the judgment to be made: before/after (TIME), west/east (SPACE), or positive/negative (CTRL). Following the presentation of the response pair, an event selected from a distribution of historical events was displayed on the screen (Fig. 1B). The distribution of historical events was centered on the imagined self-position to ensure an equal probability of occurrence of before/west/positive and after/east/negative answers (Fig. 1C). Although mental time travel and mental space navigation were hypothesized to yield overlapping neural activity, we also tested whether self-projection and distance effects would reveal distinct networks for mental time travel and mental space navigation.
Materials and Methods
Participants.
Seventeen healthy volunteers (age 24 ± 4 years; 8 males), all right-handed, with normal or corrected-to-normal vision and no history of psychological disorders, participated in this study. All subjects lived in the Paris region. All participants were compensated for their participation and provided written informed consent in accordance with the Ethics Committee on Human Research at the Commissariat à l'Energie Atomique et aux Energies Alternatives (DRF/I2BM, NeuroSpin, Gif-sur-Yvette, France) and the Declaration of Helsinki 2008.
Design and procedure.
At least 48 h before the experiment, participants were provided with a list of events with their historical description, dates, and locations on a world map centered on Paris (https://www.google.ca/maps, June 1, 2013). Participants were asked to study the list and were informed that their knowledge would be tested before the fMRI experiment. On the day of the neuroimaging experiment, participants completed a questionnaire and rated their recollection of each event by selecting "sure," "not sure," or "forgotten." Events were presented in random order across participants.
During the neuroimaging experiment, stimuli consisted of white words on a black background presented via a back-projection display (1024 × 768 resolution, 60 Hz refresh rate). PsychToolbox-3 (http://psychtoolbox.org/) was used to present stimuli, record behavioral responses, and synchronize experimental timing with the scanner pulses. Stimuli covered on average 1.3 × 8.9° of visual angle. Participants were instructed to maintain fixation on a central cross when no words were presented.
The experiment consisted of a fast event-related design of 76 blocks grouped into four acquisition runs. Each block started with the presentation of the spatiotemporal self-position cue at which participants were to mentally imagine themselves. Over the 76 experimental blocks, participants could be asked to imagine themselves now in Paris (NOW, HERE), 9 years ago in Paris (PAST, HERE), in 9 years in Paris (FUTURE, HERE), now in Cayenne (NOW, WEST), or now in Dubai (NOW, EAST). Each possible self-position comprised 16 blocks, except for the (NOW, HERE) condition, which comprised 20 blocks (Fig. 1C). The presentation of the self-position cue (2 s) was followed by a fixation cross lasting 3 s and, then, by a succession of 8 trials (Fig. 1A). Each trial consisted of one of three possible tasks: a spatial judgment task akin to mental space navigation (SPACE), in which participants indicated whether a historical event occurred to the west or east of the imagined self-position; a temporal judgment task akin to mental time travel (TIME), in which participants indicated whether a historical event occurred before or after the imagined self-position; or a valence judgment task (control condition, CTRL), designed to control for the strategic and procedural aspects of the 2-AFC task as well as for memory retrieval, in which participants indicated whether they considered the historical event to have a positive or negative valence. In a given trial, the question indicating the dimension of the judgment (Before/After? for TIME, West/East? for SPACE, and Positive/Negative? for CTRL) was presented on the screen for 0.9 s and followed by a variable interstimulus interval ranging from 0.220 to 0.330 s. The historical event (e.g., Olympic Games) was then presented for 0.9 s, after which participants were given up to 2.5 s to respond. The next trial followed after a 2 s intertrial interval. On average, a trial lasted 5 s. Hence, over the 76 blocks, a total of 104 trials (2 × 24 for TIME + 2 × 24 for SPACE + 8 for CTRL) were collected for each condition and 152 trials (2 × 36 for TIME + 2 × 36 for SPACE + 8 for CTRL) for the (NOW, HERE) condition. TIME, SPACE, and CTRL trials were randomized within each block. Self-position was fixed within a block and randomized between two successive acquisition runs.
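For illustration only, the nominal block timing described above can be sketched as follows; this is a minimal example assuming the durations reported in this section, with the response window counted at its 2.5 s maximum, so the resulting value is an upper bound on block duration rather than a reflection of the ~5 s average trial length.

```r
# Sketch of one experimental block's nominal timeline (durations in seconds),
# assuming the values reported above; illustrative only.
set.seed(1)
n_trials  <- 8
isi       <- runif(n_trials, 0.220, 0.330)   # jittered interstimulus interval
trial_dur <- 0.9 + isi + 0.9 + 2.5 + 2.0     # question + ISI + event + max response + ITI
block_dur <- 2.0 + 3.0 + sum(trial_dur)      # self-position cue + fixation + 8 trials
round(block_dur, 1)                          # upper-bound block duration
```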
Judgments were given by a button press with the thumb using FORP response buttons (Current Designs) held in each hand. Participants responded with their left hand for Before, West, and Positive, and with their right hand for After, East, and Negative in the first two acquisition runs; the response-hand mapping was reversed for the last two acquisition runs to alleviate cognitive-motor mapping confounds in the behavioral and fMRI data. Accuracy and speed were equally emphasized.
Historical events.
Thirty-six historical events were used (Fig. 1B; Table 1). Past events were public and well-known to minimize subject-specific recollection biases; similarly, future events were highly probable ones, such as planned construction sites or upcoming sports events, to minimize the confounding of event probability with temporal distance. The final set of historical events covered a span of 54 years centered on 2013 (the year the fMRI data were collected) and 320 angular degrees of longitude centered on Paris (the experiment location). During the fMRI scans, historical events were distributed uniformly across dates and longitudes and were selected according to the imagined self-position and the judged dimension: for example, when participants imagined themselves 9 years ago and made a temporal judgment, the event set was centered on the year 2004 and comprised 24 events. Similarly, when participants imagined themselves in Cayenne and made a spatial judgment, the event set was centered on Cayenne and comprised 24 events (Fig. 1C).
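As an illustration of how the event set could be centered on the imagined self-position, the sketch below selects a 24-event subset balanced around a temporal self-position; the placeholder dates, the column names, and the "12 nearest events on each side" rule are assumptions made for illustration, not the exact selection procedure used in the experiment.

```r
# Illustrative sketch of centering a 24-event subset on the imagined
# self-position in time, so that "before" and "after" responses are equiprobable.
events <- data.frame(name = paste0("event_", 1:36),
                     year = seq(1986, 2040, length.out = 36))  # placeholder dates
self_year <- 2004                                   # e.g., self-position "9 years ago"
before <- events[events$year <  self_year, ]
after  <- events[events$year >= self_year, ]
subset24 <- rbind(tail(before[order(before$year), ], 12),      # 12 closest earlier events
                  head(after[order(after$year),  ], 12))       # 12 closest later events
table(subset24$year < self_year)                    # 12 before, 12 after
```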
MRI data acquisition and analysis.
Anatomical MRI and fMRI were acquired using a 3T MRI scanner (TIM Trio; Siemens). Each participant underwent a 7 min anatomical scan using a T1-weighted MPRAGE sequence (160 slices; repetition time, 2300 ms; echo time, 2.98 ms; FOV, 256 mm; voxel size, 1 × 1 × 1 mm3) and four 15–20 min functional imaging runs with a 32-channel head coil and high-resolution multiband imaging (Xu et al., 2013) with multiband factor = 3, GRAPPA factor = 2, 69 interleaved axial slices, voxel size = 2 × 2 × 2 mm3, repetition time = 1.5 s, and echo time = 30 ms.
We used statistical parametric mapping (SPM8; Wellcome Trust Centre for Neuroimaging, London; http://www.fil.ion.ucl.ac.uk/spm/) for image preprocessing with slice-timing correction, realignment, coregistration with the structural image, normalization to MNI stereotactic space, and spatial smoothing with a 5 mm FWHM isotropic Gaussian kernel. A two-level analysis was implemented. In the first-level analysis, fixed-effect GLMs were estimated subject-wise on the basis of a design matrix that covered all four fMRI sessions and included boxcar regressors, ranging from question onset to event offset and convolved with a canonical hemodynamic response function, for every experimental condition, as well as nuisance covariates for the motion parameters, their first derivatives, and the acquisition runs. Single-subject GLMs were estimated for all voxels comprised in a liberal subject-wise gray matter mask (index > 0.1). A high-pass filter (128 s cutoff) was applied to remove slow drifts unrelated to the paradigm. Each trial was assigned to a regressor defined by the following combined parameters: the spatiotemporal self-position of its block (one of five: NOW in PARIS; NOW WEST or EAST of PARIS, i.e., in Cayenne or Dubai; PAST or FUTURE in PARIS, i.e., 9 years ago or 9 years in the future), the task (TIME, SPACE), the relative position of the historical event (relative west/east or relative past/future; not analyzed in this study), and the distance separating the imagined self-position from the historical event (egocentric distance), split into four bins (very close, close, far, and very far). For instance, one trial could be as follows: self-position = PAST, task = TIME, side = relative west, distance = close. In the model, the control task was combined with the five spatiotemporal self-positions only. The five self-position cue presentations were modeled separately. Hence, overall, 51 regressors could be included in an acquisition run, owing to the across-block condition randomization. In the second-level analysis, individual contrast images for each of the experimental conditions (first-level regressors) relative to rest were entered into a whole-brain ANOVA (random-effects model) with experimental condition as a within-subject factor. T contrasts were then estimated as linear weighted combinations of the effects of interest. Standard thresholds were applied for whole-brain T contrasts: second-level contrast maps were thresholded at a voxelwise p < 0.001, followed by FDR correction at the cluster level, so that only clusters with a clusterwise FDR-corrected p value below 0.05 were kept. The T contrasts of interest were TIME > SPACE, SPACE > TIME, TIME > CTRL, SPACE > CTRL, TIME: PAST + FUTURE > NOW, SPACE: WEST + EAST > HERE, TIME: CLOSE > FAR, and SPACE: CLOSE > FAR.
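The assignment of trials to first-level regressors can be illustrated with the following sketch, which builds a combined condition label from the block's self-position, the task, the relative side of the event, and a four-bin egocentric distance; the quartile-based binning and all variable names are assumptions made for illustration, since the exact bin boundaries are not specified here.

```r
# Illustrative sketch of first-level condition labeling (assumed variable names).
# 'trials' is a hypothetical per-trial data frame from one participant.
trials <- data.frame(self_position = "PAST",
                     task = c("TIME", "SPACE", "TIME", "CTRL"),
                     side = c("before", "east", "after", NA),
                     dist = c(3, 25, 18, NA))   # |event - self-position|, task units
# Split egocentric distance into four bins (quartiles assumed for illustration)
trials$dist_bin <- cut(trials$dist,
                       breaks = quantile(trials$dist, probs = seq(0, 1, 0.25), na.rm = TRUE),
                       labels = c("very close", "close", "far", "very far"),
                       include.lowest = TRUE)
# Combined condition label used as a regressor name (CTRL trials keep only self-position)
trials$condition <- ifelse(trials$task == "CTRL",
                           paste(trials$self_position, "CTRL", sep = "_"),
                           paste(trials$self_position, trials$task,
                                 trials$side, trials$dist_bin, sep = "_"))
trials$condition
```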
To perform conjunction analyses, we used the Minimum Statistic Compared to the Conjunction Null (Nichols et al., 2005), implemented in SPM8, on a second-level random-effects ANOVA model in which the relevant T contrasts were selected. We used the same multiple-comparisons procedure as for single T contrasts: a voxelwise threshold of p < 0.001 uncorrected, followed by clusterwise FDR correction at p < 0.05. Z-statistics, corrected clusterwise p values, and cluster extents are provided in Tables 3–6.
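The conjunction rule amounts to thresholding, at each voxel, the minimum of the statistics from the contrasts being combined (Nichols et al., 2005). Below is a minimal sketch with simulated maps standing in for the real contrast images; the subsequent clusterwise FDR step is performed in SPM and omitted here.

```r
# Minimum-statistic conjunction (conjunction null): a voxel is declared active
# for the conjunction only if it exceeds the threshold in *every* contrast,
# i.e., if the minimum of the contrast statistics exceeds the threshold.
set.seed(2)
t_time  <- rnorm(1e4, mean = 0.5)   # simulated voxelwise statistic map, TIME contrast
t_space <- rnorm(1e4, mean = 0.5)   # simulated voxelwise statistic map, SPACE contrast
t_conj  <- pmin(t_time, t_space)    # voxelwise minimum statistic
t_crit  <- qnorm(1 - 0.001)         # ~ p < 0.001 voxelwise threshold (normal approximation)
conj_voxels <- which(t_conj > t_crit)
length(conj_voxels)                 # surviving voxels, before clusterwise FDR correction
```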
Prescan interviews.
Participants were asked to recall the dates and locations of the historical events provided to them 1 week before the fMRI acquisition. Participants correctly dated the events (12.4% of dates mistaken; mean error, 0.04 years) and localized them on the world map (3.5% of locations placed outside the country they belong to, and 7.3% slightly misplaced). Significantly more recall errors occurred for future than for past events (mean difference = 5.56%, z = 2.2, p = 0.03) and for locations east of Paris than for locations west of Paris (mean difference = 9.06%, z = 3.5, p = 0.004). Reported and veridical dates did not differ significantly in absolute value, suggesting that most errors were close approximations of the veridical dates. Additionally, the more temporally distant events were from the present, the more often they were forgotten (temporally very far minus very close, mean difference = 7.6%, z = 1.98, p = 0.047). Hence, altogether, 11.6% of behavioral data were excluded from all reported analyses.
Behavioral data analysis.
Linear mixed-effects (LME) models were fitted using R (R Foundation for Statistical Computing) with the lme4 and lmerTest packages (Bates et al., 2015). Two LMEs were fitted: one for reaction times (RT) and one for error rates (ER). LME models contain both fixed and random factors. We tested four fixed factors for each model: temporal distance (TD, continuous), spatial distance (SD, continuous), the dimension of the task (DIM, categorical, 2 levels: time and space), and self-location (SL, categorical, 3 levels). For this latter factor, past and future self-locations were combined, as were west and east locations, so that the three levels were "HERE, NOW," "HERE, PAST + FUTURE," and "WEST + EAST, NOW," based on the absence of differences observed in previous studies using the same design (Arzy et al., 2009a; Gauthier and van Wassenhove, 2016). In each tested model, random effects were modeled for subjects (SUBJ factor, 17 levels) and events (EVENT factor, 36 levels) in addition to the fixed effects. Rather than performing two separate repeated-measures ANOVAs (one with event and one with subject as the random effect), we chose to account conjointly for subject- and item-related variance with LME (Baayen et al., 2008): by treating both subjects and events as drawn from much larger populations, the findings can be generalized beyond the sampled subjects and events. In the LME modeling RT, only correct trials were analyzed (accounting for 83% of RT data). Moreover, the incremental addition of factors to a statistical model can yield overfitting of the data. To circumvent this issue, we used a model selection strategy based on the Akaike Information Criterion (AIC): starting from a null model containing only the random effects, fixed effects were added one by one, retaining at each step the factor providing the model with the lowest AIC and passing a χ2 likelihood-ratio test (α = 0.05) against the previous model (forward selection). Planned comparisons and post hoc tests were then derived from the selected final model with the lmerTest package. Reported p values are relative to an intercept baseline condition chosen here to be the physical reference (namely, SL = NOW, HERE), estimated with Markov Chain Monte Carlo sampling, and degrees of freedom were estimated following the Satterthwaite approximation (Bates et al., 2015).
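A minimal sketch of one step of this forward selection with lme4/lmerTest is given below; the data frame and column names are hypothetical (matching the factor labels above), and the loop over all candidate fixed effects and interactions is omitted.

```r
library(lme4)
library(lmerTest)

# 'dat' is a hypothetical per-trial data frame with RT (correct trials only),
# temporal distance TD, spatial distance SD, dimension DIM, self-location SL,
# and the two random-effect grouping factors SUBJ and EVENT.
m0 <- lmer(RT ~ 1 + (1 | SUBJ) + (1 | EVENT), data = dat, REML = FALSE)  # null model
m1 <- update(m0, . ~ . + TD)   # one candidate fixed effect
m2 <- update(m0, . ~ . + SD)   # another candidate fixed effect

AIC(m0, m1, m2)   # keep the candidate with the lowest AIC...
anova(m0, m1)     # ...provided the chi-square (likelihood-ratio) test is significant
# The retained model becomes the new baseline and the procedure is repeated until
# no candidate improves the AIC; planned comparisons are then read from summary()
# of the final model (Satterthwaite degrees of freedom via lmerTest).
```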
Results
Self-position, distance, and dimension (space vs time) significantly affected RT and ER over and above subject- and event-related variance, as assessed by mixed-model selection for the effects of interest (Table 2). Assessing the individual significance of the selected factors revealed no main effect of dimension on RT (DIM, t = −0.89, p = 0.38) or on ER (t = 0.44, p = 0.66), suggesting that the temporal and spatial conditions were balanced for task difficulty. In the following results, reported differences between TIME and SPACE are thus unlikely to be related to differences in task difficulty or task execution.
Self-projection and distance effects in temporal and spatial judgments
Self-projection in time is typically associated with slower RT and increased ER (Arzy et al., 2009b; Gauthier and van Wassenhove, 2016). As predicted, temporal judgments were slower after self-projection in time (PAST + FUTURE vs NOW, t = 3.77, p < 0.001; Fig. 2A); similarly, spatial judgments were slower when participants mentally self-positioned in Cayenne or Dubai than in Paris (WEST + EAST vs HERE, t = 5.11, p < 0.001; Fig. 2A). Additionally, self-positioning in space increased ER during spatial judgments (WEST + EAST vs HERE, t = 5.10, p < 0.001; Fig. 2B), and self-positioning in time increased ER during temporal judgments (PAST + FUTURE vs NOW, t = 4.11, p < 0.001; Fig. 2B). Together, these results support the existence of self-projection mechanisms not only in the temporal dimension (Arzy et al., 2009b) but also in the spatial dimension: mental self-positioning away from the here and now is associated with a behavioral cost (Gauthier and van Wassenhove, 2016). The behavioral costs of self-projection in space and in time did not differ significantly for RT (paired t test, t(32) = −1.27, p = 0.21) or ER (paired t test, t(32) = 0.63, p = 0.53), allowing us to discard trivial effects of task difficulty when comparing the neural correlates of self-projection in time for mental time travel and self-projection in space for mental space navigation. Additionally, temporal judgments were slower for self-positions away from Paris (WEST + EAST vs HERE, t = 3.04, p < 0.005), although spatial judgments were not significantly slowed when participants self-positioned away from the present (PAST + FUTURE vs NOW, t = 1.04, p = 0.3). Such chronometric space-time interactions may index a competition between the psychological dimensions of space and time during self-projection (Gauthier and van Wassenhove, 2016): mental events may be represented egocentrically along their temporal or spatial dimension, but not both simultaneously. This working hypothesis is explored next with fMRI.
Regardless of self-position in time and space, the further away events were from the mental self-position, the faster the temporal (TD(TIME), t = −6.72, p < 0.001) and the spatial (SD(SPACE), t = −3.37, p < 0.001) judgments were. There were no significant effects of spatial (temporal) distance on temporal (spatial) judgments (SD(TIME), t = −0.25, p = 0.8; TD(SPACE), t = −1.82, p = 0.07), suggesting that distance effects were dimension-specific (TD × DIM, t = 4.26, p < 0.001 and SD × DIM, t = 2.43, p < 0.05). As observed with self-projection, the behavioral costs of spatial and temporal distance computations did not differ significantly for RT (paired t test, t(32) = 0.46, p = 0.57) or ER (paired t test, t(32) = 0.35, p = 0.72), allowing us to discard trivial effects of task difficulty when comparing the neural correlates of temporal distance in mental time travel and spatial distance in mental space navigation. Additionally, spatial distances produced a stronger slowing down of RT when participants were self-positioned away from Paris (SD × (WEST + EAST vs HERE), t = −2.2, p < 0.05). Distance effects on ER were found for spatial judgments with spatial distance (SD, t = −2.03, p < 0.05) but not for temporal judgments with temporal distance (t = 0.1, p = 0.9). Self-positioning in space yielded a faster decrease of ER with spatial distance for spatial judgments (SD × (WEST + EAST vs HERE), t = −3.8, p < 0.001), and self-positioning in time yielded a faster decrease of ER with temporal distance for temporal judgments (TD × (PAST + FUTURE vs NOW), t = −3.22, p = 0.001), consistent with previous findings (Gauthier and van Wassenhove, 2016).
As increasing the distance between the self and mental events systematically speeded RT and decreased ER, we operationalized self-projection as the egocentric remapping of events stored as allocentric memories; this hypothesis was specifically tested with fMRI.
Overlapping networks for mental navigation in time and space
First, to reveal the core network mediating mental travels in time and space, we examined the regions engaged in the processing of mental time (TIME) and mental space (SPACE) with respect to a control condition (emotional valence judgment, CTRL). TIME and SPACE were each contrasted with CTRL (Fig. 3A; Table 3, peak voxel coordinates). Additionally, to uncover regions engaged in both temporal and spatial processing, a conjunction analysis was performed (Fig. 3, activations in purple; Table 3, last two columns). Ten activation clusters overlapped in the conjunction analysis: bilateral inferior parietal lobule/angular gyrus (IPL/AG), medial precuneus/retrosplenial cortices (PCUN/RSC), medial precuneus/superior parietal lobules (PCUN/SPL), superior frontal cortices (sFront), right posterior cingulate cortex (PCC), and frontopolar cortices (Fig. 3A), covering 65% of the volume of the TIME > CTRL activations (1345 of 2080 voxels) and 30% of the SPACE > CTRL activations (1045 of 4569 voxels). Among these 10 regions commonly activated by mental travels in time and in space against control, the statistically significant effects observed in the frontopolar cortex and the PCC were driven by differences in negative betas, suggesting that these two regions were not part of the core network of mental travels but rather idiosyncratic to the chosen control. Conversely, a few regions were found in only one contrast: bilateral parahippocampal cortices (PHC) and inferior temporal gyri (iT) were found only in the SPACE > CTRL contrast and were mainly driven by positive betas for SPACE, whereas the right intraparietal sulcus (IPS) and dorsolateral prefrontal cortex (dlPFC) were found only in the TIME > CTRL contrast and were largely driven by negative betas for CTRL (Fig. 3B). Among the 8 core regions found to be specific to mental travels, SPACE tended to elicit higher activation levels than TIME, especially in the parietal lobe.
In sum, we found a parietofrontal network specifically supporting mental time travel and mental navigation in space, independently of the chosen 2-AFC task on memorized events. The differences found between TIME and SPACE were further investigated and quantified through direct TIME versus SPACE whole-brain contrasts.
TIME versus SPACE: dimension-specific networks
To obtain a general overview of the specificity of mental time travel and mental space navigation, we performed direct contrasts between all TIME and all SPACE trials, regardless of all other parameters. Contrasting TIME and SPACE yielded two networks that were consistent with those observed in the contrasts for self-projection and distance (Fig. 4; Table 4). The SPACE > TIME contrast yielded large posterior activations in bilateral PCUN/RSC, PHC, AG/transverse occipital sulcus, and iT, but also frontal activations in bilateral sFront (BA6/BA8), the left inferior frontal gyrus (IFG), and the left middle frontal gyrus (MFG). Contrasting TIME > SPACE revealed a large parietofrontal network: bilateral posterior IPS and bilateral frontal activations along the dlPFC, together with the left presupplementary motor area (pre-SMA). In addition to these parietofrontal activations, bilateral iT, left lingual gyrus, and left cerebellar activations were also found.
Overall, the direct contrast between mental time travel (TIME) and mental space navigation (SPACE) showed that spatial processing capitalizes specifically on medial temporal and posterior parietal regions, whereas temporal processing operates through a widespread network importantly comprising prefrontal regions.
Dimension specificity of brain activations for self-projection and distance
The behavioral effects found for self-projection and relative distance were dimension-specific. Thus, we first assessed brain activation patterns capturing self-projection in time and self-projection in space separately. For this, we contrasted TIME trials in which participants self-projected (PAST + FUTURE) with TIME trials in which no self-projection was required (NOW). Corresponding peak voxel coordinates are reported in Table 5. Self-projection in time elicited activations in bilateral IPS, IPL/AG, sFront, several areas of the prefrontal cortex (dlPFC, pars orbitalis), the anterior insula (aIns), iT, the anterior part of the pre-SMA, and medial and lateral regions of the cerebellum (Fig. 5A). We then contrasted SPACE trials with self-projection (WEST + EAST) with SPACE trials without self-projection (HERE). Self-projection in space mainly elicited parietal activations (IPS/AG, PCUN/RSC, and PCUN/SPL), with the notable exception of the right superior frontal cortex (Fig. 5A). To reveal the regions commonly engaged by self-projection in time and in space, a conjunction analysis was performed; it revealed that the right IPL/AG, PCUN/SPL, and superior frontal cortices were significantly activated in both contrasts (Fig. 5A; Table 5). These three regions accounted for 4% of the activated volume in the PAST + FUTURE > NOW contrast (290 of 7116 voxels; 3 of 21 clusters) and 37% of the activated volume in the WEST + EAST > HERE contrast (290 of 779 voxels; 3 of 6 clusters).
In sum, self-projection in time elicited widespread activations of parietal, frontal, temporal, and cerebellar cortices, whereas self-projection in space essentially engaged the parietal lobe. In the conjunction analysis, only 3 regions were found, covering a small fraction of the two networks. For these reasons, we consider that self-projection in time and in space mainly operates on dimension-specific substrates.
We then assessed brain activity corresponding to the dimension-specific distance effects observed behaviorally, by investigating separately the temporal distance effect in TIME and the spatial distance effect in SPACE. For this, we contrasted TIME trials performed on the temporally closest events (TIME: CLOSE) with those performed on the furthest ones (TIME: FAR). Similarly for SPACE, trials performed on the spatially closest events (SPACE: CLOSE) were contrasted with those performed on the spatially furthest ones (SPACE: FAR). Temporal distance effects were restricted to the right IPL/AG and the right aIns (Fig. 5B; Table 6). By contrast, spatial distance effects included a large network: bilateral IPL/AG, aIns, PCUN/SPL, PCUN/RSC, IPS, right superior frontal cortex, pre-SMA, rostrolateral PFC, iT, and the left cerebellum (Fig. 5B). A conjunction analysis showed that the right IPL/AG and the right aIns were commonly engaged by distance in a CLOSE > FAR contrast regardless of the dimension of the task (Table 6). These two regions accounted for 72% of the activated volume in the TIME: CLOSE > FAR contrast (94 of 129 voxels; 2 of 2 clusters) and 3% of the activated volume in the SPACE: CLOSE > FAR contrast (94 of 3441 voxels; 2 of 16 clusters). Bilateral IPS, right iT, right sFront, right pre-SMA, and the left cerebellum were all differentially engaged across the time and space dimensions as well as across the two cognitive task components (i.e., self-projection and distance). Hence, overall, although the temporal distance network was almost entirely contained within the network responding to spatial distances, the converse was not true: spatial distances elicited a widespread network encompassing parietal, temporal, and frontal cortices.
Together, we observed a partial dissociation between TIME and SPACE in self-projection and a partial dissociation between TIME and SPACE in distance computation. Additionally, both operations partially dissociated: self-projection in time covered 90% (7116/(7116 + 779) voxels) of the activated volume over the two self-projection contrasts, whereas distance in space covered 96% (3441/(3441 + 129) voxels) of the activated volume over the two distance contrasts.
The overall results indicate that mental time travel and mental space navigation build on largely dissociable neural substrates for two cognitive operations that are self-projection and distance computation. Remarkably, the dissociation between TIME and SPACE networks across these cognitive components is consistent with the results obtained with their direct contrasts reported in the previous section. These observations suggest that the intrinsic differences in neural activation between mental time travel and mental space navigation can be specifically attributed to the implementation of self-projection and distance computations.
Right IPL/AG, a hub for setting and monitoring egocentric maps
Although large dissociations were found between the mental processing of time and space, and between self-projection and distance computation, a small region of the IPL was consistently found in each contrast conjunction (self-projection: 176 of the 266 voxels of the IPL cluster; distance: 58 of 74 voxels). A whole-brain conjunction analysis between the self-projection contrasts and the egocentric distance contrasts, in time and in space, revealed that a portion of the right IPL located in BA39 (x = 42, y = −70, z = 40, Z = 3.91, extent = 46) was specifically involved in self-projection and distance computation in both TIME and SPACE (Fig. 6). This finding indicates that this subregion of the IPL likely supports a cognitive operation that is central to mental travels, regardless of their temporal or spatial dimension. These results also suggest that self-projection and relative distance computation may heavily rely on the abstract coding of egocentric maps in this subregion. Additionally, the inferior part of the right IPL was more activated for spatial content and the superior part for temporal content, as assessed by the direct SPACE versus TIME contrasts (Fig. 4), the self-projection contrasts (Fig. 5A), and the distance contrasts (Fig. 5B), suggesting a regional specialization of the IPL.
Discussion
In this study, subjects performed a mental time travel and a mental space navigation task while undergoing fMRI. The main findings are that self-projection and distance computations engaged distinct brain networks for the processing of the temporal and the spatial dimensions of events, with the exception of activity in the right IPL, which captured egocentric distance but also self-projection in these two dimensions. We discuss below how the observation of distinct neural networks suggests that similar cognitive operations (here, self-projection and distance computations) may operate in different cognitive maps (here, temporal and spatial cognitive maps).
Domain specificity of egocentric mapping in mental time and in mental space
The robust behavioral cost associated with self-projection is consistent with the literature on mental time travel (Arzy et al., 2008, 2009b) and mental space navigation (Gauthier and van Wassenhove, 2016): a change of mental reference point increases the behavioral cost, whereas the relative distance between an event and the mental reference decreases it. Importantly, this pattern is the inverse of the one found in seminal mental rotation and image scanning studies, which reported increased behavioral costs with distance (Shepard and Metzler, 1971; Kosslyn et al., 1978), suggesting that mental travels in time and space cannot be reduced to mental imagery. Rather, it was recently hypothesized that self-projection could be operationalized as an egocentric remapping of the temporal or spatial cognitive maps (Gauthier and van Wassenhove, 2016). In other words, the here and now would be a vantage point not only for actually experienced journeys but also for mental ones (Liberman and Trope, 2008).
In our study, the activation of both retrosplenial and posterior parietal cortices during mental space navigation supports the hypothesis of egocentric remapping, even in the absence of real (Ghaem et al., 1997) or virtual (Committeri et al., 2004) spatial experience. Damage to the posterior parietal cortex is known to impair egocentric navigation and memory (Aguirre and D'Esposito, 1999; Wilson et al., 2005; Ciaramelli et al., 2010), and, more recently, imagined spatial locations and headings have been decoded from activation patterns in the RSC (Marchette et al., 2014; Sulpizio et al., 2016). This is in agreement with the BBB model (Burgess et al., 2001a; Byrne et al., 2007), in which allocentric coordinates of the self-position in the hippocampus and of landmark positions in the parahippocampus are converted into egocentric frames in the retrosplenial cortex, a hypothesis also supported by recent empirical evidence (Zhang et al., 2012; Dhindsa et al., 2014). Also consistent with the egocentric hypothesis, the hippocampal and parahippocampal cortices were not significantly engaged by self-projection in space.
Self-projection in TIME entailed a frontoparietal network that was largely distinct from the one observed in SPACE, suggesting a different cognitive mapping scheme. This network was consistent with the widespread dlPFC activations previously reported during mental time travel (Botzung et al., 2008; Nyberg et al., 2010; Viard et al., 2011a). Additionally, bilateral insular and pre-SMA activations were found only for self-projection in TIME: the insula has been associated with emotional valence (Craig, 2009; Viard et al., 2011a) and with the representation of the self (Arzy et al., 2009b). Functionally coupled with the anterior insula (Cauda et al., 2011), the pre-SMA also contributed to the same network. As no significant engagement of the hippocampus was found, self-projection in time appears to capitalize on high-level neocortical structures with a privileged link to the representation of the self.
A direct contrast between SPACE and TIME confirmed the neural dissociations: whereas spatial processing was mostly confined to posterior regions, temporal processing mostly recruited frontal regions, consistent with previous reports on the temporal and spatial ordering of memories (Schedlbauer et al., 2014). Hence, one possibility is that, unlike spatial processing, for which dedicated structures enable egocentric remapping, setting a new temporal viewpoint may require online computations implicating executive functions, in agreement with mental time travel being an evolutionarily novel form of mental manipulation (Suddendorf et al., 2009).
In sum, and consistent with recent psychophysical predictions (Gauthier and van Wassenhove, 2016), self-projection or egocentric remapping appears to be a major contributor to the differentiation of mental time and space processing.
Dedicated networks for dimension-specific egocentric distances
Psychological distances are mental metrics representing distances along various psychological dimensions (e.g., time, space, or social distance) with respect to the self (Liberman and Trope, 2008, 2014). Interestingly, no behavioral interference between temporal and spatial distances was observed, supporting the notion that time and space are representationally distinct (Gauthier and van Wassenhove, 2016).
A widespread parietofrontal network was modulated by spatial distance in SPACE (Fig. 4B) and was functionally divided into two classes of regions with respect to self-projection and distance. The first class overlapped with the network activated by self-projection in space (PCUN/RSC, PCUN/SPL, and IPL) and reflected both the real-to-imagined position and the self-to-event egocentric spatial distances. The second class, comprising the IPS, left PHC, right temporal cortex, right prefrontal cortex, bilateral insula, and right pre-SMA, was exclusively related to the self-to-event egocentric spatial distances. This suggests that, although the first network may encode distance egocentrically (Byrne et al., 2007), the second may be related to the allocentric coding of distance, notably implicating the parahippocampus. This interpretation would be in line with the recent proposal of a network-based allocentric coding of space (Ekstrom et al., 2014). Alternatively, this second network could relate to task demands: studies of goal-directed virtual navigation reported similar distance modulations of the precentral cortex, middle temporal gyrus, and insula, associated with behavioral costs (Epstein, 2008; Viard et al., 2011b; Sherrill et al., 2013). Bilateral IPS and prefrontal activations have often been reported in symbolic and nonsymbolic magnitude processing (Fias et al., 2007; Piazza et al., 2007), suggesting that spatial distances could be processed through such a quantitative system. Although these regions were not found in the distance contrasts for TIME, the IPS was seen in the self-projection contrast: one possibility is that self-projection in time entails the elaboration of a temporal cognitive map of events, in an egocentric frame of reference, onto which time events are maintained along with their magnitudes. It is noteworthy that neither the spatial navigation system, the general language network (Friederici and Gierhan, 2013), nor the known arithmetic network (Arsalidou and Taylor, 2011) appeared to fully overlap with the regions we observe here for mental time travel.
A domain-general representation in IPL?
The finding of common activations in parietal and frontal cortices suggests a common processing of abstract relations computed within egocentric maps. PCUN/SPL (Burgess et al., 2001b; Committeri et al., 2004; Lambrey et al., 2012) and superior frontal cortex (BA6/BA8) activations have previously been associated with changes in spatial perspective (Committeri et al., 2004; Sulpizio et al., 2016) and in temporal self-location during mental time travel (Nyberg et al., 2010; Viard et al., 2011a). These two regions may serve egocentric remapping of mental events by directing attention to internal representations (Wagner et al., 2005) according to retrieval goals (Cabeza, 2008). In other words, these two regions may support a common workspace for the computation of relations between event representations within a cognitive map.
Here, the right IPL/AG was responsive to both self-projection and egocentric distances, thus appearing to be a key area for egocentric remapping and for metric computations. This finding is consistent with recent reports (Parkinson et al., 2014) and with the observation that lesions of the AG impair the processing of visual egocentric frames in near space (Bjoertomt et al., 2009; Lane et al., 2015). Activation of the right IPL/AG has been reported during mental time travel (Botzung et al., 2008; Viard et al., 2011a) as well as during the perception of temporal order (Battelli et al., 2007). The right IPL/AG may thus support the mapping of magnitudes (time, space, quantity) with respect to an egocentric reference so as to allow self-related ordinal judgments. Such a mapping would reconcile the similar behavioral distance effects observed in time and in space with their distinct neural implementations, and strongly suggests that egocentric mapping in the IPL is indeed abstract. Although activations were found in the IPL for both TIME and SPACE, it is noteworthy that an inferior-to-superior regional specialization of the IPL could be seen for space and time, respectively (Figs. 4, 5A). This gradient in the IPL is consistent with a recent subject-level fMRI study of mental orientation (Peer et al., 2015) and with resting-state functional connectivity parcellations of the IPL, in which the inferior IPL is connected only to the spatial network (Uddin et al., 2010; Yeo et al., 2011).
Together, these observations suggest distinct networks for time and space computations before a remapping in an abstract format, a hypothesis that will be addressed with time-resolved neuroimaging.
In conclusion, our findings nuance the hypothesis of a common system for the representation of time and space. We showed that two important cognitive operations characteristic of mental time and space travels (namely, self-projection and distance computation) strongly dissociate in their neural implementations as a function of the task dimension (time or space). However, a robust overlap was found in a subregion of the IPL, suggesting the possibility of a shared dimensionless (thus abstract) egocentric mapping during mental time travel and mental space navigation. Such convergence may have emerged from shared constraints in anchoring the self in a common referential system.
Footnotes
This work was supported by European Research Council YStG-263584 and Agence Nationale de la Recherche 10JCJC-1904 to V.v.W. Preliminary data were presented at the annual meeting of the Society for Neuroscience (2014, Washington, DC), the annual meeting of the Organization for Human Brain Mapping (2015, Geneva), and the annual meeting of the Cognitive Neuroscience Society (2016, New York). We thank the members of the Unité de Recherche en NeuroImagerie Applicative Clinique et Translationnelle and the medical staff at NeuroSpin for help in recruiting and scheduling participants; Karin Pestke for help with some of the data collection; Benoît Martin for some of the preliminary analyses; members of the Cognitive Neuroimaging Unit for fruitful discussions; Kamil Ugurbil, Essa Yacoub, Steen Moeller, Eddie Auerbach, and Gordon Junqian Xu (Center for Magnetic Resonance Research, University of Minnesota) for sharing pulse sequence and reconstruction algorithms; and Alexis Amadon for technical support.
The authors declare no competing financial interests.
Correspondence should be addressed to Dr. Baptiste Gauthier, Université Paris-Saclay, F-91191 Gif/Yvette, France. gauthierb.ens@gmail.com