Journal of Neuroscience
Research Articles, Behavioral/Cognitive

A Neural Compass in the Human Brain during Naturalistic Virtual Navigation

Zhengang Lu, Joshua B. Julian, Geoffrey K. Aguirre and Russell A. Epstein
Journal of Neuroscience 20 August 2025, 45 (34) e1765242025; https://doi.org/10.1523/JNEUROSCI.1765-24.2025
Author affiliations:
1Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (Z.L., J.B.J., R.A.E.)
2Department of Psychology, New York University, New York, New York 10012 (Z.L.)
3Princeton Neuroscience Institute, Princeton, New Jersey 08544 (J.B.J.)
4Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (G.K.A.)

Abstract

A central component of wayfinding is the ability to maintain a consistent representation of one's facing direction (FD) when moving about the world. In rodents, head direction cells are believed to support this “neural compass,” but identifying a similar mechanism in humans during dynamic naturalistic navigation has been challenging. To address this issue, we acquired fMRI data from male and female human participants while they freely navigated through a virtual reality city. Encoding model analyses revealed voxel clusters in the posterior–medial cortex (the “retrosplenial complex”) and superior parietal lobule that exhibited reliable tuning as a function of FD. Crucially, these directional tunings were consistent across perceptually different versions of the city, spatially separated locations within the city, and motivationally distinct phases of the behavioral task. Analysis of the model weights indicated that these regions represent a broad range of possible FDs and that they do so by representing heading relative to the principal axis of the environment. These findings reveal specific mechanisms in the human brain that allow us to maintain a sense of direction during naturalistic, dynamic navigation.

  • encoding model
  • fMRI
  • heading direction
  • retrosplenial complex
  • spatial cognition
  • superior parietal lobe

Significance Statement

Theories of spatial navigation posit the existence of a “neural compass” that maintains one’s sense of direction when moving through the world. Head direction cells are believed to support this representation in rodents, but it has been challenging to identify a similar signal in humans using noninvasive methods. We use an encoding model analysis of fMRI data to identify brain regions—retrosplenial complex and superior parietal lobe—that represent facing direction when people perform a naturalistic task in which they dynamically navigate through a virtual reality city. Crucially, this directional signal is stable across versions of the city with different visual features, locations within the city, and phases of the behavioral task, thus exhibiting key characteristics of a “neural compass.”

Introduction

Neurons responsive to spatial quantities, such as heading, position, distances, and directions, have been identified in freely moving rodents through electrophysiological recordings (O’Keefe and Nadel, 1978; Grieves and Jeffery, 2017; Bicanski and Burgess, 2020). It is widely believed that such findings will ultimately provide an avenue for grounding high-level cognition in well-defined neurocomputational mechanisms (Burgess, 2014). For this promise to be fulfilled, however, it is necessary to draw strong connections between spatial codes observed in rodents and spatial codes observed in humans. This is challenging, because it is not easy to identify the neural correlates of spatial navigation in humans. Intracranial recording studies are limited by the fact that data collection is opportunistic, constrained in anatomical coverage, and restricted to clinical populations. fMRI studies are limited by the fact that the BOLD (blood oxygenation level-dependent) signal in fMRI has a sluggish response, making it difficult to assess spatial codes that typically change on a faster timescale during dynamic navigation.

In the current study, we attempted to surmount the limitations of fMRI by collecting data in a naturalistic navigation paradigm and analyzing it using voxelwise encoding models. This analysis method has been previously used to examine representational codes in complex dynamic stimuli with temporally overlapping features, such as movies and audiobooks (Nishimoto et al., 2011; Huth et al., 2016). Thus, we thought it would be useful for examining spatial representations during dynamic navigation. To demonstrate the feasibility of the method, we focused on representations of facing direction (FD). This quantity, sometimes referred to as “heading,” is equivalent to the head direction (HD) signal often studied in rodents. An organism that can represent FD/HD can use this signal as an internal “compass” that is a central element of many navigational strategies (Gallistel, 1990; McNaughton et al., 2006).

In rodents, HD cells have been identified in many brain structures, including the presubiculum, anterior thalamus, and retrosplenial cortex (Taube, 2007). In humans, heading-modulated neurons have been identified in the medial temporal lobe, but their properties have not been investigated in detail (Tsitsiklis et al., 2020). Neuroimaging studies have identified FD-related signals in several brain regions, including the posterior–medial cortex (within the functionally defined retrosplenial complex or RSC), superior parietal lobe (SPL), entorhinal cortex (ERC), and thalamus (Baumann and Mattingley, 2010; Vass and Epstein, 2013, 2017; Marchette et al., 2014; Chadwick et al., 2015; Shine et al., 2016; Kim et al., 2017; Kim and Maguire, 2019; Koch et al., 2020; Dutriaux et al., 2024). However, these studies have not typically examined heading codes during naturalistic navigation; instead, they have used trial-based designs in which heading is indicated by static stimuli or is artificially constrained for intervals during which heading does not change. One previous study used a voxelwise encoding model approach similar to the one employed here to identify heading-related codes during dynamic navigation (Nau et al., 2020). However, the virtual environment in that study was a single circular arena with constant perceptual features, making it infeasible to test whether the heading code was independent of the perceptual inputs or stable across different locations—crucial features of a “neural compass.”

Here we go beyond this earlier work by identifying heading codes in participants performing a naturalistic “taxi-cab” task in a large-scale virtual city containing multiple subspaces and goal locations. We examined fMRI responses within individual voxels for two signatures of heading codes: FD-related activity and FD-related adaptation. The former is modulation of the BOLD signal as a function of FD. The latter is modulation of the BOLD signal as a function of the stability or change in FD. We then tested whether the FD-related activity codes exhibited key features of a “neural compass”: stability of heading representations across perceptual changes, task changes, and location changes, and representation of many possible headings across the entire angular range. To anticipate, our results indicate that two regions of the brain—RSC and SPL—code for FD during dynamic virtual navigation in a compass-like manner.

Materials and Methods

Participants

Fifteen individuals (six males; ages 22–34 years; mean age, 27 years) were recruited from the University of Pennsylvania community and were paid for their participation. Written informed consent was obtained from each participant. All participants had normal or corrected-to-normal vision and reported that they were in good health with no history of neurological disease. Experimental procedures were approved by the University of Pennsylvania Institutional Review Board.

Experimental design

Virtual cities

Two versions of a virtual city (City v1 and v2) were developed using the Source video game engine SDK, Hammer Editor (valvesoftware.com). The two versions were spatially identical, with the same layout of roads, walls, and buildings, but they had different surface textures on the buildings and roads to make them visually distinct. This design allowed the two versions of the cities to act as perceptual controls for each other when analyzing spatial codes. City v2 was described to the participants as a “post-apocalyptic” version of City v1.

The cities spanned 120 × 201 virtual meters and contained main streets, corridors, alleys, buildings, courtyards, a church, and statues. The edges of the cities were bounded by walls. A mountain outside the boundary was visible on one side of the city, and apartment buildings outside the boundary were visible on another side (Fig. S1). Twelve stores within the cities served as goal locations. These were clearly indicated by distinct storefronts and signage, and they were identical and in the same locations in both city versions. The store names were Antique Store, Swan Books, Post Office, Bicycle Store, Hardware Store, Bakery, YMCA, Laundry, Smith's Bar, Comic Den, Starbucks, and Luigi's Pizza.

Behavioral training

Prior to scanning, participants completed two 1 h behavioral training sessions on a laptop computer to familiarize them with City v1 and the experimental task. During these sessions, participants navigated through the city from a first-person point of view and controlled their movements using arrow buttons on the keyboard (with the left arrow button for turning left, the right arrow button for turning right, and the up arrow button for moving forward). The first training session consisted of three stages. In Stage 1, participants freely explored the city for 15 min. In Stage 2, participants navigated between the 12 target stores in a random order. The name of a target store was presented at the bottom of the screen; participants navigated to the named store, and upon reaching it they were prompted to find another store. This stage of training was self-paced and terminated once all target stores were found.

In Stage 3, participants played a taxi-cab game. The game alternated between a search phase and a deliver phase. In the search phase, participants navigated through the virtual city until they found a passenger, who they picked up by walking to the passenger’s location. The name of a goal store then appeared at the bottom of the screen, indicating that participants were to deliver the passenger to that location. Upon reaching the indicated goal store, the name of a second goal store appeared, indicating that participants were to further deliver the passenger to the second goal. The cycle then began again with the participant searching for a new passenger. Passengers could appear in 24 possible locations that were distributed throughout the maze so that passenger locations appeared to be random from the point of view of the participant.

Participants played the taxi-cab game until the end of the 1 h session. They were then asked to return at their earliest convenience for Session 2 of behavioral training (14 out of 15 subjects came back within 2 d), during which they played the taxi-cab game on the laptop for an additional hour.

fMRI task

fMRI scanning took place over five separate scan sessions extending over a maximum of 3 weeks (mean, 13.4 d), except for one participant, who was scanned in six sessions. The first session began 0–4 d after the second training session. In all scan sessions, participants played the same taxi-cab game that they performed in the behavioral training.

Participants lay on their backs in the scanner and used a mirror mounted on the head coil to view a visual display presented at the end of the scanner bore on an InVivo SensaVue Flat Panel Screen at 1,024 × 768 pixels resolution (diagonal, 80.0 cm; width × height = 69.7 × 39.2 cm). The visual display subtended ∼18.9° × 10.7°. They controlled their movements with a four-button MR–compatible button box (with the left button for turning left, the second button for moving forward, and the third button for turning right). Participants were free to move their eyes and fixation was not required. Eye movements were not recorded. All experience of the environment was from the ground level: at no point did participants view a map of the environment or a view from a top–down perspective.

City v1 was used for the first three or four scan sessions, and City v2 was used for the remaining two scan sessions. When switching cities, the participants were told, “A war has broken out in your city. It is still the same city, but it looks different.” Each fMRI run was 11 min in length (330 TRs). Navigation on each run began from the same location, with the participants standing in front of a horse statue at the center of the city and facing south (Fig. 1b). Either 4, 5, or 6 scan runs were acquired in each session, depending on the time available, except for one session in one participant, for which only two scan runs were acquired. The total scan time in City v1 was 165–187 min (4,950–5,610 TRs), and the total scan time in City v2 was 88–99 min (2,640–2,970 TRs). Immediately prior to entering the scanner for each scan session, participants performed two search–deliver cycles of the taxi-cab task outside the scanner to refamiliarize them with the task.

MRI acquisition and preprocessing

Scanning was conducted at the Center for Functional Imaging at the University of Pennsylvania on a 3T Siemens Prisma scanner equipped with a 64-channel head coil. T1-weighted (T1w) images for anatomical localization were acquired in each scan session using 3D magnetization-prepared rapid acquisition gradient-echo pulse sequence (MPRAGE) protocol [repetition time (TR), 2,200 ms; echo time (TE), 4.67 ms; flip angle (FA), 8°; voxel size, 1 × 1 × 1 mm; matrix size, 192 × 256 × 160 mm]. T2*-weighted functional images sensitive to BOLD signals were acquired using a multiband gradient-echo echoplanar pulse sequence (TR, 2,000 ms; TE, 25 ms; FA, 70°; voxel size, 2 × 2 × 2 mm; multiband factor, 3; matrix size, 96 × 96 × 81 mm). Field mapping was performed after each MPRAGE scan with a dual-echo (TE, 4.12, 6.58 ms) gradient-echo sequence with pulse TR, 580 ms; FA, 45°; pixel bandwidth, 260; voxel size, 3.4 × 3.4 × 4.0 mm; and matrix size, 220 × 220 × 208 mm.

Data from the first nine participants were preprocessed using fMRIPrep 1.2.6 (Esteban et al., 2019; RRID:SCR_016216), which is based on Nipype 1.1.7 (Gorgolewski et al., 2011; RRID:SCR_002502). Data from the other six participants were preprocessed using fMRIPrep 20.2.1, which is based on Nipype 1.5.1 (RRID:SCR_002502). The parameters below are for one typical participant using fMRIPrep 1.2.6.

T1w images were corrected for intensity nonuniformity (INU) using N4BiasFieldCorrection (Tustison et al., 2010). A T1w-reference map was computed after registration of 5 T1w images (after INU-correction) using mri_robust_template (FreeSurfer 6.0.1; Reuter et al., 2010). The T1w reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.2.0), using OASIS as the target template. Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1, RRID:SCR_001847; Dale et al., 1999). Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c (Fonov et al., 2009; RRID:SCR_008796) was performed through nonlinear registration with antsRegistration (ANTs 2.2.0, RRID:SCR_004757; Avants et al., 2008), using brain-extracted versions of both T1w volume and template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9, RRID:SCR_002823; Zhang et al., 2001).

For each of the BOLD runs, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. A deformation field to correct for susceptibility distortions was estimated based on a field map that was coregistered to the BOLD reference, using a custom workflow of fMRIPrep derived from D. Greve's epidewarp.fsl script and further improvements of HCP Pipelines (Glasser et al., 2013). Based on the estimated susceptibility distortion, an unwarped BOLD reference was calculated for a more accurate coregistration with the anatomical reference. The BOLD reference was then coregistered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration (Greve and Fischl, 2009). Coregistration was configured with 9 degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9; Jenkinson et al., 2002). BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 (Cox and Hyde, 1997; RRID:SCR_005927). The BOLD time series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions. Functional data were smoothed with a Gaussian kernel [6 mm full-width at half-maximum (FWHM)] using 3dBlurToFWHM (AFNI). The six motion parameters, along with polynomial trends up to the third order, were regressed out of the fMRIPrep-preprocessed functional data using ordinary least-squares regression (custom Python scripts). Finally, functional data were z-scored within each run before model fitting.
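The final nuisance-regression step (removing the six motion parameters and low-order polynomial trends by ordinary least squares, then z-scoring within run) can be sketched as follows. This is a minimal illustration on synthetic data; the function name and array shapes are ours, not the authors' actual scripts.

```python
import numpy as np

def clean_run(bold, motion, poly_order=3):
    """Regress motion parameters and polynomial trends out of one run,
    then z-score each voxel's residual time course."""
    n_tr = bold.shape[0]
    t = np.linspace(-1, 1, n_tr)
    trends = np.vander(t, poly_order + 1)            # cubic..constant trend columns
    X = np.column_stack([motion, trends])            # nuisance design matrix
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)  # ordinary least squares
    resid = bold - X @ beta
    return (resid - resid.mean(0)) / resid.std(0)    # z-score within run

rng = np.random.default_rng(0)
bold = rng.standard_normal((330, 5))    # 330 TRs (11 min at TR = 2 s), 5 voxels
motion = rng.standard_normal((330, 6))  # six head-motion regressors
cleaned = clean_run(bold, motion)
```

The cleaned data have zero mean and unit variance per voxel, matching the z-scoring described above.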

fMRI statistical analyses

Overview

To test for fMRI responses corresponding to FD codes, we fit the BOLD response in each voxel with three encoding models. The first model accounted for responses related to motion energy in the stimulus display. The second model accounted for fMRI adaptation effects that are induced by changes in FD. The third model accounted for fMRI activation differences related to FD in each possible direction. Because we expected signals related to visual motion energy to be large and dominant, potentially masking the FD-related signals of interest, we fit the motion energy model to the data first. We then removed these signals before proceeding with the FD-related analyses. For the latter analyses, we first analyzed FD-adaptation effects alone using a nonlinear fitting approach. Then we analyzed FD-activation and FD-adaptation effects together in a linear model to account for potential common variance.

Motion energy

To determine visual motion energy signals, we preprocessed the video information that was presented to the participants in each scan run. The game engine recorded the visual display at 15 Hz, and we converted these game files to mp4 files. We then downsampled these videos from 1,024 × 768 to 128 × 96 and added black bands of pixels to the top and bottom to make each frame a 128 × 128 square. These downsized video files were converted into 128 × 128 × 3 × 44,220 (height × width × colors × frames) matrices in uint8 format using MATLAB and converted from RGB values to CIE L*a*b* color space. The luminance channel values were retained for the encoding model analysis, and the color values were discarded.

Next we quantified the motion energy in the preprocessed videos using a set of spatiotemporal Gabor filters. Each filter was defined by one of three temporal frequencies (cycles/time window; min tf, 1.33; max tf, 2.67), one of five spatial frequencies (cycles/image; min sf, 1.5; max sf, 24), and one of eight orientations of the Gabor wavelets. Filters spanned a square grid that covered each frame of the video images. Two Gabor filters with quadrature phases (0° and 90°) were defined for each location, orientation, and spatial frequency, and the outputs of these two filters were squared and summed to produce one feature value for each quadrature pair. The resulting motion energy feature space consisted of 2,139 Gabor feature channels. Prior to analysis, the time course of activation across these filters was downsampled from the visual display frame rate (67 Hz) to the fMRI sampling rate (0.5 Hz, TR = 2 s) by computing the mean feature channel values within each TR.
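The quadrature-pair computation at the heart of the motion energy model can be illustrated with a minimal spatial sketch: two Gabor filters 90° apart in phase are applied to a patch, and their squared outputs are summed to give a phase-invariant energy. The filter parameters and helper names below are illustrative, not the exact filter bank used in the study.

```python
import numpy as np

def gabor_pair(size, sf, theta):
    """Quadrature pair (0 and 90 degree phase) of Gabor filters."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)           # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))  # Gaussian envelope
    even = env * np.cos(2 * np.pi * sf * xr / size)
    odd = env * np.sin(2 * np.pi * sf * xr / size)
    return even, odd

def energy(patch, even, odd):
    """Phase-invariant energy: square and sum the two quadrature outputs."""
    return (patch * even).sum() ** 2 + (patch * odd).sum() ** 2

even, odd = gabor_pair(16, sf=2, theta=0.0)
rng = np.random.default_rng(1)
patch = rng.standard_normal((16, 16))
e = energy(patch, even, odd)
```

In the actual model this computation is repeated over a grid of locations, orientations, spatial frequencies, and temporal frequencies to produce the 2,139 feature channels.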

Finally, we used an L2-regularized (ridge) regression to estimate model weights that map the motion energy feature space onto the BOLD fMRI data for each voxel in each participant. This analysis was performed separately for City v1 and City v2. Feature channels were normalized by z-scoring the activation values within each channel across all time points for each scan run. Five finite impulse response (FIR) predictors at lags of 0, 2, 4, 6, and 8 s were used to estimate the hemodynamic response for each feature channel at each voxel. A leave-one-run-out cross-validation procedure was used to choose the regularization parameters (alphas). For each fold of the cross-validation, motion energy model weights were estimated using 20 possible alpha values (1–1,000, logarithmically spaced) and then used to predict responses on the withheld scan run. For each voxel, we chose the regularization parameter that yielded the highest mean performance accuracy across folds, as assessed by Pearson's correlation (r) between the predicted and actual fMRI responses. We then repeated the regression on the entire dataset for each city version (v1 or v2) using the best-fit regularization parameter. The residuals between the predicted time course and the actual brain data were computed for each city and used for modeling of the FD-adaptation and FD-activation effects, described below.
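The ridge fit with FIR delays and leave-one-run-out selection of the regularization parameter can be sketched as follows. This is a toy version with synthetic runs; `make_fir` uses a circular shift for brevity, and all names and shapes are ours rather than the authors' pipeline.

```python
import numpy as np

def make_fir(features, lags=(0, 1, 2, 3, 4)):
    """Stack time-lagged copies of the feature matrix (lags in TRs,
    i.e., 0-8 s at TR = 2 s). Circular shift is used for brevity."""
    return np.column_stack([np.roll(features, lag, axis=0) for lag in lags])

def ridge_fit(X, y, alpha):
    """Closed-form L2-regularized regression weights."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def best_alpha(runs_X, runs_y, alphas):
    """Leave-one-run-out cross-validation: pick the alpha with the highest
    mean Pearson r between predicted and held-out responses."""
    scores = []
    for alpha in alphas:
        rs = []
        for held in range(len(runs_X)):
            X_tr = np.vstack([X for i, X in enumerate(runs_X) if i != held])
            y_tr = np.concatenate([y for i, y in enumerate(runs_y) if i != held])
            w = ridge_fit(X_tr, y_tr, alpha)
            pred = runs_X[held] @ w
            rs.append(np.corrcoef(pred, runs_y[held])[0, 1])
        scores.append(np.mean(rs))
    return alphas[int(np.argmax(scores))]

rng = np.random.default_rng(2)
w_true = rng.standard_normal(10)
runs_X, runs_y = [], []
for _ in range(4):                      # four synthetic scan runs
    X = make_fir(rng.standard_normal((100, 2)))
    runs_X.append(X)
    runs_y.append(X @ w_true + 0.1 * rng.standard_normal(100))
alphas = np.logspace(0, 3, 20)          # 20 values from 1 to 1,000
alpha = best_alpha(runs_X, runs_y, alphas)
```

After alpha selection, the regression would be refit on the full dataset with the winning alpha, as described above.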

FD adaptation

Adaptation effects, by definition, are effects engendered by changes (or absence of changes) in the quantity of interest over time. To examine these effects, we used a nonlinear model that accounted for variation in the BOLD fMRI signal in each voxel in terms of FD history preceding each timepoint. FD values (expressed as orientation in degrees) were recorded by the game engine at 5.127 Hz (3,384 samples per 11 min run). This vector was downsampled by a factor of 3 prior to model fitting.

The nonlinear model was under the control of parameters that adjusted the temporal integration of FD values and the gain of the effect of this behavioral history upon the neural response. At each time point, the recent history of FD values was weighted by a decaying exponential function defined by a time constant (tau; bounded between 0.01 and 20 s). The modeled adaptation neural response at a time point was given as the (circular) difference between the current FD value and the recent history of FD values. The neural adaptation effect underwent multiplicative scaling under the control of a gain parameter, and this neural prediction was then convolved with a canonical HRF. A nonlinear search (MATLAB fmincon) was used to identify the parameters that minimized the L2 norm of the difference between the model fit and the BOLD fMRI data at each voxel. The R2 value of the model fit at each voxel was retained for voxel selection and display.
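The core of this nonlinear adaptation model (an exponentially weighted FD history, a circular difference against the current FD, and a gain) can be sketched as below. HRF convolution and the fmincon parameter search are omitted, and the discrete-time implementation is our own simplification of the description above.

```python
import numpy as np

def circ_diff(a, b):
    """Signed circular difference between two angles in degrees."""
    return (a - b + 180) % 360 - 180

def adaptation_response(fd, tau, gain, dt=1.0):
    """Predicted adaptation signal: at each timepoint, the absolute circular
    difference between the current FD and an exponentially weighted circular
    mean of recent FD values (time constant tau, in seconds), times a gain."""
    n = len(fd)
    resp = np.zeros(n)
    for t in range(1, n):
        lags = np.arange(1, t + 1)
        w = np.exp(-lags * dt / tau)
        w /= w.sum()
        # decay-weighted circular mean of the recent FD history
        hist = np.rad2deg(np.angle(np.sum(w * np.exp(1j * np.deg2rad(fd[t - lags])))))
        resp[t] = np.abs(circ_diff(fd[t], hist))
    return gain * resp

# a single 90-degree turn: large transient response that decays with tau
fd = np.concatenate([np.full(20, 90.0), np.full(20, 180.0)])
resp = adaptation_response(fd, tau=4.0, gain=1.0)
```

The response spikes at the turn and decays back toward zero as the history catches up with the new heading, which is the qualitative behavior the model is meant to capture.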

We wished to account for the effects of adaptation in subsequent, linear models. To do so, we derived a linear subspace of observed FD-adaptation effects from the nonlinear fitting results. This was achieved by retaining the model fit of adaptation effects in each voxel. Each time-series vector was normalized to have unit variance. Then, we applied eigendecomposition to the covariance matrix of the predicted responses in each participant. We selected the two principal components with the highest eigenvalues (average variance explained: 95.84% in City v1 and 95.74% in City v2). Finally, we projected the original predicted adaptation responses onto the new two-dimensional subspace by multiplying them by the eigenvector matrix. This allowed us to reduce the dimensionality of the predicted voxel responses while preserving most of the variance related to FD adaptation. These vectors were then used as covariates in subsequent linear analyses, described below.
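The eigendecomposition step can be sketched as follows, assuming a time × voxel matrix of predicted adaptation responses; the synthetic two-latent-component structure is illustrative only.

```python
import numpy as np

def top_pcs(pred, k=2):
    """Project unit-variance predicted time series (time x voxels) onto the
    k principal components with the largest eigenvalues."""
    pred = pred / pred.std(0)               # unit variance per voxel
    cov = np.cov(pred, rowvar=False)        # voxel covariance matrix
    evals, evecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]     # top-k components
    explained = evals[order].sum() / evals.sum()
    return pred @ evecs[:, order], explained

rng = np.random.default_rng(3)
t = np.linspace(0, 8 * np.pi, 200)
# voxels sharing two latent adaptation time courses plus a little noise
latent = np.column_stack([np.sin(t), np.cos(t)])
mix = rng.standard_normal((2, 30))
pred = latent @ mix + 0.05 * rng.standard_normal((200, 30))
pcs, explained = top_pcs(pred)
```

With an effectively rank-2 signal, the top two components capture nearly all the variance, analogous to the ~96% figures reported above.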

Note that this model identifies fMRI responses related to changes in FD, but it is agnostic about the underlying representations that engender such effects. In particular, it cannot distinguish between visual representation (such as optic flow representations) that are modulated by FD changes and spatial representations of FD, especially in regions exhibiting a fast time course of temporal integration. Although motion energy-related signals were removed from the fMRI data before analyses, we cannot preclude the possibility that some visual signals related to optic flow might remain.

FD activation

To model FD activation in each voxel, we first converted the scalar FD value at each TR into an FD vector consisting of 45 separate FD channels. Each channel was defined by a von Mises circular probability density function. The peaks of these functions were equally spaced around the 360° circle with a separation of 8°, and each function had a width of 8°, defined as the FWHM. The value of each channel at each timepoint was the value of the scalar FD at that timepoint passed through the channel's von Mises function. Thus, at each timepoint, each channel had a value that scaled with the angular distance between the scalar FD and the peak of the channel's von Mises function, with the highest activation for the channel that peaked closest to the scalar value. All feature channels were normalized by z-scoring the values within each feature channel across all time points within each run in the fMRI dataset (City v1 or City v2). Five FIR predictors at lags of 0, 2, 4, 6, and 8 s were used to estimate the hemodynamic response for each feature channel at each voxel.
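Converting a scalar FD into von Mises channel activations might look like the following sketch. The conversion of FWHM to the von Mises concentration parameter is our own derivation (half height at FWHM/2), and the z-scoring and FIR steps are omitted.

```python
import numpy as np

def vonmises_channels(fd_deg, n_channels=45, fwhm_deg=8.0):
    """Convert scalar facing directions (degrees) into activations over
    n_channels von Mises channels with peaks equally spaced around 360°."""
    centers = np.arange(n_channels) * (360.0 / n_channels)  # 8° spacing
    half = np.deg2rad(fwhm_deg / 2)
    kappa = np.log(2) / (1 - np.cos(half))   # so the channel is at half height at FWHM/2
    delta = np.deg2rad(fd_deg[:, None] - centers[None, :])
    return np.exp(kappa * (np.cos(delta) - 1))  # peak value 1 at the channel center

fd = np.array([0.0, 45.0, 183.0])   # three example facing directions
channels = vonmises_channels(fd)    # shape: (timepoints, 45)
```

Each row is a soft, circular one-hot code: the channel whose center is nearest the scalar FD is most active, with activation falling off smoothly on either side.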

A joint model that includes the FD-activation feature space and the two PCs constituting the FD-adaptation feature space was estimated for each voxel in the two motion energy residual fMRI datasets (City v1 and City v2), using a modified version of ridge regression called banded ridge regression (Nunez-Elizalde et al., 2019). This procedure effectively decorrelates the features of the component models (in this case, FD activation and FD adaptation) by assigning each feature space its own regularization parameter: each voxel received one regularization parameter for FD activation and one for FD adaptation. To find the optimal pair of regularization parameters for each voxel, a range of 11 alpha values was explored using the same leave-one-run-out cross-validation procedure used to fit the motion energy model. This allowed the joint model to estimate the model weights for FD activation and FD adaptation independently across all voxels in the brain. The pair of regularization parameters that yielded the highest mean performance accuracy was selected for each voxel. These regularization parameter pairs were then used to estimate the model weights for each voxel.
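Banded ridge regression can be illustrated by giving each feature band its own diagonal penalty. This toy sketch is our own and not the Nunez-Elizalde et al. implementation; it fits the two bands jointly with separate alphas in closed form.

```python
import numpy as np

def banded_ridge(X_bands, y, alphas):
    """Ridge regression with a separate regularization parameter for each
    feature band (here: FD-activation and FD-adaptation feature spaces)."""
    X = np.column_stack(X_bands)
    penalty = np.concatenate([np.full(Xb.shape[1], a)
                              for Xb, a in zip(X_bands, alphas)])
    w = np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)
    # split the joint weight vector back into one block per band
    splits = np.cumsum([Xb.shape[1] for Xb in X_bands])[:-1]
    return np.split(w, splits)

rng = np.random.default_rng(4)
act = rng.standard_normal((300, 45))    # 45 FD-activation channels (FIR lags omitted)
adapt = rng.standard_normal((300, 2))   # two FD-adaptation principal components
y = act @ rng.standard_normal(45) + adapt @ np.array([1.5, -0.5])
w_act, w_adapt = banded_ridge([act, adapt], y, alphas=(10.0, 1.0))
```

In the real analysis the alpha pair for each voxel would be chosen by leave-one-run-out cross-validation rather than fixed as here.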

To validate the FD-activation and FD-adaptation models for each city version, we used the parameter weights for each model to predict responses in the other city version (cross-city validation) and calculated Pearson's correlation (r) between the predicted responses and the actual fMRI time course in the validation city version dataset. The twofold cross-city model performance accuracies were then averaged for each voxel in each participant. To determine the significance of model prediction accuracy, we conducted a group-level statistical analysis using AFNI. First, we spatially normalized individual subject-level encoding model prediction maps to the MNI152 template. Next, we applied spatial smoothing with a Gaussian kernel (10 mm FWHM) and performed a voxelwise one–sample t test to determine whether the model prediction r value at each voxel was significantly greater than zero at the group level. To control for multiple comparisons, we applied false discovery rate (FDR) correction at q < 0.005. We then visualized the significant voxels on the MNI152 template, as shown in Figure 3b.
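The group-level inference step (a one-sample t test of prediction accuracy against zero at each voxel, FDR-corrected) can be sketched as follows; the Benjamini-Hochberg step-up procedure is written out by hand, and the accuracy maps are synthetic.

```python
import numpy as np
from scipy import stats

def group_fdr(r_maps, q=0.005):
    """One-sample t test of prediction accuracy (r) against zero at each
    voxel, with Benjamini-Hochberg FDR correction across voxels."""
    t, p = stats.ttest_1samp(r_maps, 0.0, axis=0, alternative='greater')
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # step-up thresholds
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    sig = np.zeros(m, dtype=bool)
    sig[order[:k]] = True                         # reject the k smallest p values
    return t, p, sig

rng = np.random.default_rng(5)
# 15 participants x 100 voxels; the first 20 voxels carry real signal
r_maps = 0.01 * rng.standard_normal((15, 100))
r_maps[:, :20] += 0.1
t, p, sig = group_fdr(r_maps)
```

Voxels surviving the q < 0.005 threshold would then be projected onto the MNI152 template for display, as in Figure 3b.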

Generalization of direction code across location and task phase

We used cross-sector validation to test whether the observed direction code generalized across different locations within the environment. The cities were designed such that a single choke point divided them into separate sectors, which we designated “North” and “South.” Typically, participants would spend a substantial portion of time in one sector before passing into the other, making them hemodynamically separable from each other. We thus grouped the fMRI timepoints for each city version based on the corresponding sectors and reanalyzed the fMRI data accordingly, using the banded regression procedure described above. The model fits were assessed through cross-sector cross–city validations (fourfold, City v1 North vs City v2 North; City v1 North vs City v2 South; City v1 South vs City v2 North; and City v1 South vs City v2 South), and performance for same-sector validations (North/North and South/South) was compared with performance for different-sector validations (North/South and South/North).

Similarly, we used cross-task-phase validation to test whether the directional code generalized across different phases of the task in which participants pursued different behavioral goals (searching for passengers vs delivering passengers to stores). For this analysis, we grouped fMRI datapoints according to the phase of the experiment (search vs deliver) and reanalyzed the data accordingly. The model fits were assessed through cross-phase cross-city validations (fourfold, City v1 Search vs City v2 Search; City v1 Search vs City v2 Deliver; City v1 Deliver vs City v2 Search; and City v1 Deliver vs City v2 Deliver), and performance for same-phase validations (Search/Search, Deliver/Deliver) was compared with performance for different-phase validations (Search/Deliver and Deliver/Search).

Regions of interest

In addition to whole-brain analyses, we also performed analyses on several regions of interest (ROIs). These included brain regions previously implicated in the processing of visual or spatial information relevant to spatial navigation (hippocampus, ERC, thalamus, parahippocampal place area, RSC, occipital place area), along with the early visual cortex (EVC) as a comparison region. The PPA, OPA, RSC, and EVC were defined based on visual category contrasts (scene > object for PPA, OPA, and RSC; scrambled object > intact object for EVC) obtained from a group of 42 functional localizer participants previously run in our lab. The ROI for the ERC was defined from each participant's anatomical parcellation as obtained from FreeSurfer. The bilateral hippocampus and thalamus were anatomically defined using FSL's automatic subcortical segmentation protocol (FIRST). We also defined RSC (FD) and SPL (FD) ROIs in each participant to explore the nature of the FD code in the regions where evidence for this code was strongest. These were clusters of ∼200 voxels contiguous with the RSC or SPL peaks in each hemisphere showing the strongest FD-activation effect. We calculated standard temporal signal-to-noise ratio (tSNR) by dividing the mean of the preprocessed data from the first run by its standard deviation for each ROI in each participant. The average tSNR values across participants in each ROI were as follows: PPA, 49.07; RSC, 56.59; OPA, 74.57; Hipp, 44.98; ERC, 29.08; Thal, 43.15; EVC, 49.27; RSC (FD), 63.63; and SPL (FD), 42.31.

Analysis of voxelwise tuning

To better understand how FD information was represented in RSC (FD) and SPL (FD), we examined the distribution of FD-activation model weights across the ensemble of voxels. For each voxel, FD-activation model weights were averaged across City v1 and v2 and then normalized across the 45 FD channels (with the minimum weight set to 0 and the maximum weight set to 1). For each ROI, voxels were pooled across participants and sorted according to the FD channel exhibiting the peak (or trough) parameter weight.
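The normalization and sorting steps above can be sketched as follows. A minimal illustration with simulated weights; shapes and names are assumptions:

```python
import numpy as np

def sort_by_peak(weights):
    """Min-max normalize each voxel's weights across FD channels
    (min -> 0, max -> 1), then order voxels by the channel holding
    the peak weight. weights: (n_voxels, n_channels) array."""
    w = weights - weights.min(axis=1, keepdims=True)
    w = w / w.max(axis=1, keepdims=True)
    order = np.argsort(w.argmax(axis=1))   # sort voxels by peak channel
    return w[order]

rng = np.random.default_rng(1)
w = rng.standard_normal((100, 45))         # 100 voxels x 45 FD channels
sorted_w = sort_by_peak(w)
```

Sorting by the trough channel instead would simply replace `argmax` with `argmin` in the ordering step.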

To test whether the distribution of FD channels exhibiting the peak (or trough) parameter weights across voxels differed from what would be expected by chance, we performed bootstrap resampling (k = 1,000) of participants and cities (v1/v2). The chance baseline was determined by the same bootstrap analysis but with parameter weights randomly shuffled circularly across FD channels (k = 1,000). To preserve any correlational structure in parameter weights across voxels, each voxel's parameter weights were shuffled by the same angular offset within each participant and ROI. Statistical significance was assessed by computing the probability that the observed mean across bootstrap resamples differed from the circularly shuffled bootstrap distribution, separately for each FD channel. We also performed the same analyses on model weights from an FD-activation model that was fit to the fMRI data without including the two PCs from the FD-adaptation model; this yielded qualitatively identical results. Post hoc analyses of trough effects were performed by defining these effects in each participant as the difference between the average model weights for the six trough channels (the three FD channels surrounding "North" and the three FD channels surrounding "South") and the average model weight for all other filters. Note that these statistical analyses examine FD tuning across the group of participants and do not preclude the possibility that individual participants might have idiosyncratic tunings that are not reflected in the group data.
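The key property of the circular-shuffle null, that every voxel's tuning curve is rotated by the same offset so cross-voxel structure is preserved, can be sketched as one null sample. Shapes and names are illustrative assumptions:

```python
import numpy as np

def circular_shuffle_null(weights, rng):
    """One null sample: rotate every voxel's tuning curve by the SAME
    random channel offset, preserving correlational structure across
    voxels. weights: (n_voxels, n_channels)."""
    offset = int(rng.integers(weights.shape[1]))
    return np.roll(weights, offset, axis=1)

rng = np.random.default_rng(2)
w = rng.standard_normal((50, 45))    # 50 voxels x 45 FD channels
null = circular_shuffle_null(w, rng)
```

Because the same column permutation is applied to every voxel, inter-voxel similarities are unchanged; only the alignment of tuning to specific FD channels is destroyed.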

To further evaluate the dimensional structure of the FD-activation model weights across voxels without relying solely on the maximum or minimum of each voxel, we used principal component analysis (PCA) and cross-projection techniques. FD-activation model weights for each city were z-scored across FD channels independently for each voxel and then mean-centered across voxels for each FD channel, within each ROI. We then performed PCA using singular value decomposition (SVD) on the City v1 model weights for each participant and determined the number of principal components required to explain 90% of variance in these weights across voxels. These components were used to project and reconstruct the City v2 model weights. We computed the proportion of variance in City v2 that was explained by this projection to assess the cross-city reliability of the reduced-dimensional representations. To test statistical significance, we generated a null distribution by randomly permuting the City v2 FD-activation model weights at the voxel level (1,000 permutations). The same projection and reconstruction procedure was applied to each permuted dataset, and the resulting variance explained was compared with the real (unpermuted) data across participants using a sign test. Similar results were obtained when using an alternative null model in which a random orthogonal rotation was applied to the projected data before reconstruction—preserving variance but disrupting the alignment with the original component axes.
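The cross-projection logic can be sketched as follows. A simplified illustration, tested here on the degenerate case where the two cities' weights are identical; the function name and data are assumptions:

```python
import numpy as np

def cross_city_variance_explained(w1, w2, var_thresh=0.90):
    """Fit PCA (via SVD) on City v1 weights, keep enough components to
    explain `var_thresh` of their variance, then project and
    reconstruct City v2 weights from those axes and report the
    variance explained in City v2.
    w1, w2: (n_voxels, n_channels), assumed mean-centered."""
    U, s, Vt = np.linalg.svd(w1, full_matrices=False)
    cumvar = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cumvar, var_thresh)) + 1
    axes = Vt[:k]                        # principal axes from City v1
    w2_hat = (w2 @ axes.T) @ axes        # project, then reconstruct
    resid = ((w2 - w2_hat) ** 2).sum()
    return 1.0 - resid / (w2 ** 2).sum(), k

rng = np.random.default_rng(3)
w1 = rng.standard_normal((80, 45))
w1 -= w1.mean(axis=0)                    # mean-center across voxels
ve, k = cross_city_variance_explained(w1, w1)
```

In the real analysis the permutation null shuffles the City v2 weights at the voxel level before the same projection and reconstruction.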

Results

Participants performed a “taxi-cab” task in two visually distinct versions of a virtual city

Participants were scanned with fMRI while they performed a taxi-cab task in two versions of a virtual city (City v1 and City v2). In this task, participants alternated between searching for passengers and delivering them to stores that served as goal locations. Visual examination of the trajectories taken by the participants indicated that they covered most of the environment when searching for passengers and took efficient routes when delivering passengers to their destinations (Fig. 1a). The two versions of the city were spatially identical but had different visual features (Fig. 1b).

Figure 1.

Active navigation in virtual reality (VR) cities. a, The map shows the layout of the VR city; stars indicate the 12 target locations. The complete movement trajectory for one participant is plotted: green dots indicate the average location of the participant at each TR, and red lines indicate the average FD. b, Ground-level views showing the same location in the two versions of the city. The layout of the walls, buildings, and walkways is the same, but the surface textures and colors are different. These two images show the initial view for each scan run, which was at the center of the city, facing "South" (i.e., toward the bottom of the map).

On average, participants found 4.08 passengers (SD, 0.78) in each scan run and delivered to 7.75 stores (SD, 1.52). The mean time to find a passenger was 98.73 s (SD, 24.11) and the mean time to deliver to a store was 28.20 s (SD, 3.88). These values did not change over the course of the experiment (F(4,70) = 0.4845; p = 0.74, searching for passengers; F(4,70) = 1.06; p = 0.3838, delivering to stores; Fig. 2a) and did not significantly differ between City v1 and City v2 during store delivery (t(14) = 1.7793; p = 0.0969; Fig. 2b) but did significantly differ during passenger search (t(14) = 2.1529; p = 0.0493; Fig. 2b). FDs tended to remain stable over the course of a 2 s MR scan acquisition (Fig. 2c): on 67% of TRs, they deviated <8°, a magnitude corresponding to the bins of our fMRI analysis model. Examination of the histogram of FDs (Fig. 2c, inset) indicated that all directions were sampled to some extent, but the cardinal directions (North, East, South, West) were sampled more, consistent with the rectilinear organization of the environment.

Figure 2.

Navigation behavior. a, Average duration of each trial phase, organized by scan session. b, Average duration of each trial phase, organized by city version. c, Directional sampling during navigation. Within each TR, participants tended to face in a single direction for the entirety of that TR. Data were binned according to the angular distance to a predominant direction (theta) within each TR (8° bin). Across the entire experiment, participants faced all directions, but the cardinal directions (North, East, South, West) were sampled more, due to the rectilinear organization of the environment. Error bars indicate ±1 standard error of the mean.

Motion energy-related signals were removed from the BOLD response

Our primary goal was to identify neural representations of FD by analysis of the fMRI BOLD signal. For any dynamic visual stimulus, motion energy will typically account for a large portion of the BOLD response. Visual motion energy is a particularly problematic confound when investigating FD codes, given the fact that changes in FD will invariably give rise to motion energy across the visual field. Thus, prior to analyzing the fMRI data for FD-related signals, we removed responses related to visual motion energy.

To do this, we represented the motion energy of the visual display at every timepoint using a set of temporal–spatial Gabor filters. We then used ridge regression to fit the loadings on this motion energy feature space to the voxelwise BOLD response in each version of the city (City v1, City v2). Results for each city version were cross-validated on the independent data from the other city version. We found that the motion energy model predicted BOLD signals across a wide swath of the cortex (Fig. S2). We computed the residual between the predicted time course of the motion energy model and actual brain data for each city and z-scored within each voxel across all time points for each scan run. The analyses of FD adaptation and FD activation reported below were performed on these residuals.
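The residualization step can be sketched as follows. This is a simplified stand-in, with a single fixed regularization parameter and no cross-validation, for the cross-validated motion-energy regression described in the text; the function and data are assumptions:

```python
import numpy as np

def residualize(X, Y, alpha=1.0):
    """Ridge-regress a motion-energy feature matrix X (time x features)
    out of BOLD data Y (time x voxels), then z-score the residuals
    within each voxel across time."""
    # Closed-form ridge solution: b = (X'X + alpha*I)^-1 X'Y
    b = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    resid = Y - X @ b
    return (resid - resid.mean(0)) / resid.std(0)

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 20))                       # motion-energy features
Y = X @ rng.standard_normal((20, 5)) \
    + 0.1 * rng.standard_normal((200, 5))                # simulated BOLD
Z = residualize(X, Y)                                    # cleaned data
```

All subsequent FD-adaptation and FD-activation analyses operate on arrays like `Z`, from which motion-energy-related variance has been removed.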

fMRI adaptation for FD was observed in RSC and SPL

We next analyzed the fMRI response in each voxel for signals relating to FD-related adaptation; that is, modulation of the fMRI signal based on the stability of FD over time, under the specific hypothesis that the BOLD signal will be reduced if the same FD is maintained. These effects are of interest because regions that represent FD are likely to exhibit adaptation when the same FD is maintained over time. In addition, because FD is unequally sampled in our environment due to the rectilinear organization of the passageways, it is essential to account for FD-adaptation effects before considering FD-activation effects.

We modeled the FD-adaptation effect at each timepoint as the angular distance between the current FD and the integrated sum of preceding FDs. Preceding FDs contributed to the integrated sum with an exponential decay, such that more recent FDs had a stronger contribution. Because the parameters for the decay function (tau) were not known in advance and were expected to vary across voxels, we fit the data using a nonlinear regression method that allowed us to search over these variables to find the best fit. Parameter search was performed on the data from each version of the city separately and then cross-validated on the independent data from the other version of the city.
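The adaptation regressor can be sketched as a leaky integration of past FD vectors. The exact integration scheme (unit-vector summation with per-sample exponential decay) is an assumption for illustration; in the actual analysis, tau was fit per voxel by nonlinear regression:

```python
import numpy as np

def fd_adaptation_regressor(fd_deg, tau):
    """Angular distance (degrees) between the current FD and the
    direction of an exponentially decaying vector sum of preceding
    FDs. `tau` (in samples) sets the decay; recent FDs weigh more."""
    decay = np.exp(-1.0 / tau)
    integ = np.zeros(2)                        # running (cos, sin) sum
    out = np.zeros(len(fd_deg))
    for t, d in enumerate(fd_deg):
        rad = np.deg2rad(d)
        if t > 0 and np.linalg.norm(integ) > 0:
            mean_dir = np.arctan2(integ[1], integ[0])
            # circular angular distance, wrapped to [0, 180]
            out[t] = np.rad2deg(np.abs(np.arctan2(
                np.sin(rad - mean_dir), np.cos(rad - mean_dir))))
        integ = decay * integ + np.array([np.cos(rad), np.sin(rad)])
    return out
```

Maintaining a constant FD drives the regressor to zero (maximal adaptation), whereas a sudden turn produces a value near the turn magnitude.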

Results are shown in Figure 3. The two areas with the strongest FD-adaptation effect were in the posterior–medial cortex (overlapping with the RSC) and the SPL. Notably, both of these regions have previously been implicated in the coding of FD (Marchette et al., 2014; Dutriaux et al., 2024) and scene perception (Epstein and Baker, 2019; Kennedy et al., 2024) and are functionally connected to regions that process optic flow related to self-motion (Sherrill et al., 2015). We also observed FD adaptation in the visual cortex, which was strongest in regions below the calcarine sulcus representing the upper visual field. This might be attributable to visual adaptation engendered by the stability of the visual input in the upper visual field when the navigator continues to face the same direction.

Figure 3.

FD-related adaptation. a, Nonlinear fitting was used to estimate voxelwise parameter weights for an FD-adaptation model in City v1 and v2 data. Prediction accuracy was computed as the correlation (r) between the responses predicted by the model in one city version and BOLD activity in the other city version (cross-city validation). These values were then averaged across the two city versions and across all participants and plotted on a partially inflated brain (top) and flattened cortical surface (bottom). b, Thresholded version of a (q < 0.005; FDR corrected). RSC, retrosplenial complex; PPA, parahippocampal place area; OPA, occipital place area; V1/V2, primary and secondary visual cortices.

The time constants (tau) of the exponential integration from the adaptation model are plotted in Figure S3 for six ROIs that have been previously implicated in spatial processing during navigation (RSC, PPA, OPA, ERC, hippocampus, and thalamus) along with an EVC comparison region. We observed a sensible progression across regions within the visual hierarchy, with short integration times in EVC and OPA, intermediate integration times in PPA, and the longest integration times in ERC and the hippocampus. Integration times in RSC and SPL were also short, suggesting that at least some of the FD-adaptation effects in these regions might be attributable to visual signals engendered by FD changes, rather than FD adaptation per se. Note, however, that such an alternative explanation for the FD-adaptation effect would not undercut the interpretation of the FD-activation effect as evidence for FD coding.

FD signals were observed in voxelwise fMRI activity in RSC and SPL

We then turned to an analysis of FD-related activity, which was our primary analysis of interest. To this end, we employed an encoding model approach in which the FD at each timepoint was represented as a response pattern across 45 circular Gaussian filters (Fig. 4). To account for potential FD-related adaptation effects that might be confounded with FD-related activity, we simultaneously modeled FD adaptation using two regressors derived from PCA on the voxelwise adaptation effects obtained in the previously described nonlinear regression. We used banded ridge regression to fit these two feature sets (FD activation, FD adaptation) to the voxelwise BOLD response in each version of the city and then cross-validated the results on the independent data from the other version of the city.
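The channel-encoding step can be sketched as follows. The tuning-width parameter `kappa` is an illustrative choice, not a value taken from the text:

```python
import numpy as np

def fd_channels(fd_deg, n_channels=45, kappa=10.0):
    """Encode a facing direction (degrees) as responses of n circular
    Gaussian (von Mises) channels with evenly spaced preferred
    directions, normalized so the peak response is 1."""
    centers = np.linspace(0, 360, n_channels, endpoint=False)  # 8 deg apart
    resp = np.exp(kappa * np.cos(np.deg2rad(fd_deg - centers)))
    return resp / resp.max()

vec = fd_channels(90.0)   # one row of the FD-activation design matrix
```

Stacking one such vector per TR yields the 45-column FD-activation feature matrix that enters the banded ridge regression alongside the FD-adaptation regressors.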

Figure 4.

Voxelwise encoding model logic. a, FD activation feature space. We modeled FD using a von Mises circular probability density function. At each TR, the observed FD was converted into an FD vector consisting of 45 separate channels. We generated a time course for all FD features. b, A joint linear regression (banded ridge regression) was fit on the two feature spaces simultaneously to predict brain activity (y). The FD-activation and FD-adaptation feature spaces were concatenated into a large feature matrix (X), and a weight vector b was estimated. L2-regularization parameters were estimated for each feature space separately in each voxel. c, Cross-validation procedure. We separated the data into a training dataset (e.g., City v1) and a test dataset (e.g., City v2). First, we estimated model weights (b) in each voxel using banded ridge regression on the training dataset only. We then used the FD-activation model weights on their own to predict each voxel's activity in the test dataset. Finally, we quantified model prediction accuracy as Pearson's correlation (r) between the prediction vector and the recorded brain activity.

Results for the FD-activation model are shown in Figure 5. There were two regions in each hemisphere exhibiting the strongest FD-activation effects: the medial parietal lobe, overlapping with RSC, and SPL. Results for the FD-adaptation model were very similar to those observed in the previously described nonlinear model (data not shown).

Figure 5.

FD-related activation. a, Banded ridge regression was used to estimate voxelwise parameter weights for a combined FD-adaptation and FD-activation model in City v1 and v2 data. Prediction accuracy was computed for the FD-activation model as the correlation (r) between the responses predicted by the model in one city version and BOLD activity in the other city version (cross-city validation). These values were then averaged across the two city versions and across all participants and plotted on a partially inflated brain (top) and flattened cortical surface (bottom). b, Thresholded version of a (q < 0.005, FDR corrected). ROI labels are the same as in Figure 3. Crosses indicate peaks of the FD-activation effect in SPL (red) and RSC (blue), which were used to define SPL (FD) and RSC (FD) ROIs for further analyses (Fig. 6c).

To test for consistency with previous findings, we also examined the FD-adaptation and FD-activation effects within individual ROIs (Fig. 6). We found significant FD adaptation in all seven regions (all ts(14) > 7.44; ps < 0.001, FDR corrected across ROIs) and significant FD activation in RSC, PPA, OPA, ERC, thalamus, and EVC (all ts(14) > 2.62, ps < 0.05; FDR corrected across ROIs) but not the hippocampus (t(14) = 1.665; p = 0.1182). Figure 6c shows these same effects within RSC and SPL ROIs defined based on the RSC or SPL peaks in each participant showing the strongest FD-activation effect; these are used as ROIs in the analyses below.

Figure 6.

FD-related coding in spatial system ROIs. a, Significant FD-adaptation effects were found in all ROIs, including parahippocampal place area (PPA), RSC, occipital place area (OPA), hippocampus (Hipp), ERC, thalamus (Thal), and EVC. b, Significant FD-activation effects were found in all ROIs except the hippocampus. c, RSC (FD) and SPL (FD) ROIs were defined in each hemisphere as the ∼200 voxels showing the strongest FD-activation effect contiguous with the RSC (blue) or SPL (red) peaks. As expected, both FD-adaptation and FD-activation effects were highly significant in these regions. *p < 0.05; ***p < 0.001 (FDR corrected for multiple comparisons across ROIs). Error bars indicate ±1 standard error of the mean.

The direction code was stable across locations within the city and across phases of the experimental task

The above results suggest that RSC and SPL support representations of FD that could provide a “neural compass” during spatial navigation. For such a compass to be useful, it would need to be stable. The results above already provide evidence of stability across perceptual changes, given that the FD-activation model was trained and tested across different versions of the city that had the same spatial structure but different perceptual features. The results above also demonstrate stability over time, given that the City v1 and City v2 data were acquired in different scan sessions on different days. Here we test whether the FD-activation code also exhibits stability across locations within the environment and across different phases of the navigational task.

To examine stability across locations within the environment, we took advantage of the fact that our city was divisible into two subregions at a single choke point (Fig. 7a). We refer to these subregions as the “North” and “South” sectors of the environment. Using the same procedure as above, we trained the FD-activation model on each sector and tested it on data from the same or opposite sector in the other city. As before, we also included the FD-adaptation model in a banded ridge regression, and all analyses were performed on the residual data after regressing out BOLD signals related to motion energy. Figure 7b shows results for the RSC and SPL regions that showed the largest FD-activation effect in the previous, whole-environment analysis (i.e., the ROIs shown in Fig. 5c). The FD-activation effect was significant in both regions when analyzed within the same sector of the environment (RSC t(14) = 3.96; p = 0.0014; SPL t(14) = 5.64; p = 0.0001) and also—crucially—when analyzed across different sectors (RSC t(14) = 2.315; p = 0.0363; SPL t(14) = 3.0808; p = 0.0081). These results demonstrate that the FD-activation effect in these regions reflects a reliable directional code that generalizes across the entire environment.

Figure 7.

FD representations are stable across locations in RSC and SPL. a, A single choke point (red dot) divided the city into two sectors. The blue area indicates the “North” sector, and the orange area indicates the “South” sector. b, FD-activation models were trained on each sector separately and then tested on data from the same sector (gray bars) or opposite sector (white bars). Training and testing were performed across city versions (cross-city validation). Significant FD-activation effects were found for both matched and mismatched sectors, demonstrating that the FD code generalizes across locations. In RSC (FD), there was a reduction of the FD-activation effect for mismatched sectors, whereas no reduction was found in SPL(FD). *p < 0.05; **p < 0.01; ***p < 0.001. Error bars indicate ±1 standard error of the mean.

To examine stability across different phases of the navigation task, we trained the FD-activation model separately on timepoints in which subjects were searching for a passenger and timepoints in which they were delivering a passenger to a destination. We then tested whether these two FD-activation models predicted BOLD activity in the corresponding or opposite task phase in the other city. Figure 8 shows results for the RSC and SPL regions. The FD-activation effect was significant in both regions when analyzed within the same phase of the taxi-cab task, for both the search phase (RSC t(14) = 3.9943; p = 0.0013; SPL t(14) = 4.6534; p = 0.0004), and the deliver phase (RSC t(14) = 3.0397; p = 0.0088; SPL t(14) = 3.9838; p = 0.0014). Crucially, it was also significant when analyzed across the different phases of the task (search–deliver; RSC t(14) = 4.4421; p = 0.0006; SPL t(14) = 2.7404; p = 0.0159). These results demonstrate the stability of the directional code across task phases with different motivational goals. Moreover, they indicate that the FD code is unlikely to be attributable to coding of goal (store) locations, since these are only relevant in the deliver phase.

Figure 8.

FD representations are stable across task phases in RSC and SPL. FD-activation models were trained separately on search and deliver phases of the taxi-cab task and then tested on data from the same phase or opposite phase. Training and testing were performed across city versions (cross-city validation). Significant FD-activation effects were observed for both matched and mismatched phases, demonstrating that the FD code generalizes across task phases with different behavioral goals. *p < 0.05; **p < 0.01; ***p < 0.001. Error bars indicate ±1 standard error of the mean.

Tuning profiles in RSC and SPL indicated that FD is coded relative to the principal axis of the environment

The analyses above indicate that RSC and SPL mediate FD codes that are stable across different versions of the virtual city, different sectors of the city, and different phases of the navigation task. We thus proceeded to investigate how FD is encoded in the ensemble of estimated model weights in RSC and SPL. To do this, we normalized the weights for each voxel and organized the voxels within RSC and SPL according to the angular direction of their peaks and troughs (Fig. 9a). We saw an interesting and unexpected difference in the distribution of the peaks and troughs. The peaks were distributed across all possible FDs. This is an important finding, because it indicates that many possible directions are represented as one would expect from a neural compass, not just a small number. This broad distribution of peak weights cannot be attributed to ordering of noise, because we showed above that the weights for these voxels could be used to predict fMRI responses in held-out data (i.e., cross-city validation), thus indicating that they are meaningful. The troughs, on the other hand, showed unequal distribution across FDs. Although all directions were represented to some extent, there was overrepresentation of “North” (270°) and “South” (90°)—corresponding to the principal axis of the city—in both RSC and SPL, as well as “East” (0°) in SPL.

Figure 9.

Analysis of model weights suggests coding of FD relative to the principal axis of the environment. a, Ensemble of FD-activation voxelwise model weights in RSC (FD) and SPL (FD). For each voxel, FD-activation model weights were averaged across City v1 and v2 and then normalized across the 45 FD channels. Voxels were then sorted according to the FD channel exhibiting the maximum (peak) or minimum (trough) parameter weight. Directions are equally represented in the peaks but unequally represented in the troughs. b, Probability density functions of maximum and minimum responses, constructed as histograms of the voxelwise peaks and troughs. Peaks are evenly distributed across FDs; troughs show overrepresentation of North and South in RSC and of North, South, and East in SPL. Top and bottom colored curves are ±1 standard deviation of the observed data (bootstrap resampling). The gray range is ±1 standard deviation around the chance baseline determined by circular shuffling of parameter weights across participants. Labeled directions are significant outliers relative to the shuffled data (RSC: South, p = 0.03; North, p < 0.001; SPL: East, p = 0.01; South, p = 0.006; North, p = 0.003).

Analysis of the histograms for the peaks and the troughs (Fig. 9b) shows clearly that troughs fall along these directions more often than would be expected by chance, whereas peaks are more equally distributed. The trough effects at “North” and “South” were significant across participants in RSC (t(14) = 9.01; p < 0.0001) and SPL (t(14) = 4.14; p < 0.001). Moreover, the trough effects were stronger in voxels that had more reliable FD-activation effects (i.e., voxels whose model weights were more predictive of fMRI responses across cities) in RSC (average r = 0.19; t(14) = 3.07; p < 0.01) but not SPL (average r = 0.05; t(14) = 0.68; p = 0.5). This combined pattern of peaks and troughs demonstrates that the FD code in RSC and SPL is sensitive to the global structure (e.g., principal axis) of the environment. Indeed, it suggests the intriguing possibility that FD may be represented in these regions in terms of the angular deviation from the principal axis.

To further examine the dimensionality of heading representations in RSC and SPL, we performed PCA on the voxelwise model weights using SVD. For each participant, we extracted the pattern of model weights across FDs in City v1 and calculated the number of components needed to explain 90% of the variance in these weights. On average, we found that 18.6 components were required in RSC and 13.2 in SPL (Fig. 10). To evaluate the reliability of these reduced-dimensional representations, we used the components derived from City v1 to project and reconstruct the data from City v2. Across participants, the proportion of variance in City v2 explained by this reconstruction was significantly higher than that obtained from a shuffle control in both RSC and SPL (sign-test; RSC, p < 0.0001; SPL, p < 0.001). These findings demonstrate that heading representations in both regions span a space larger than just a few FDs, with multiple components required to capture the distributed voxelwise response patterns across FDs.

Figure 10.

PCA of model weights indicates that a broad range of FDs is encoded. The plot shows the cumulative variance explained in City v1 with increasing numbers of principal components of the voxelwise model weights. Individual subjects are plotted as light gray curves and the averages across subjects as red (RSC) and blue (SPL) curves. Dashed lines indicate 90% variance explained, which requires 18.6 components in RSC and 13.2 components in SPL on average. Insets show the variance explained in City v2 by this number of City v1 components, confirming the reliability of the components across cities.

Discussion

The primary goal of this study was to identify FD codes in the human brain during naturalistic, dynamic navigation. To this end, we used fMRI to record brain activity while participants performed a realistic “taxi-cab” game in a virtual city over several experimental sessions. We identified two regions of the brain—RSC and SPL—that represented FD. These FD codes were evidenced in both adaptation (reduced fMRI activity when maintaining the same FD over time) and activation (fMRI activity levels that varied as a function of FD). Crucially, the FD-related activation codes generalized across two different versions of the city that had the same spatial layout but different visual appearances. They also generalized across spatially separated sectors of the city (North vs South) and across temporally separated phases of the task (search for passenger vs deliver passenger to destination), indicating that they represent FD independent of location and behavioral goals. These findings demonstrate the critical roles of RSC and SPL in representing directional signals during active spatial navigation. RSC and SPL may support a neural “compass” that remains stable as participants navigate from one part of the environment to another, akin to the function of HD cells in rodents.

These findings build on previous studies examining FD (or “heading”) codes in the human brain but go beyond them in crucial ways. Several previous studies have used trial-based designs with adaptation or pattern classification methods to examine FD. In some of these studies, participants recalled FDs from memory when cued by static verbal or visual stimuli (Baumann and Mattingley, 2010; Marchette et al., 2014; Chadwick et al., 2015; Shine et al., 2016; Vass and Epstein, 2017; Dutriaux et al., 2024). In others, participants viewed movie clips depicting first-person movements through a virtual environment and reported the corresponding FD (Kim et al., 2017; Kim and Maguire, 2019). These studies found heading effects in RSC and SPL, consistent with the present results (and also in ERC, thalamus, and subiculum). However, these previous studies did not examine FD codes while participants actively navigated through an extended naturalistic environment, as we do here. Our findings show that the FD codes in RSC and SPL previously identified during constrained trial-based tasks are also engaged during naturalistic navigation.

To interrogate spatial codes during dynamic navigation, we used a voxelwise encoding model (Naselaris et al., 2011). We represented FD at each timepoint as activity in a set of circular Gaussian filters, each tuned to a specific direction, and we used regularized linear regression to relate these filter activations to BOLD responses in each voxel. This approach has been previously used to relate BOLD activity to visual, auditory, linguistic, and semantic features in other complex and realistic stimuli, such as visual movies and audiobooks (Nishimoto et al., 2011; Huth et al., 2016). It was also used in a previous study to examine FD codes while participants dynamically navigated within a single VR arena (Nau et al., 2020). This important precursor study found substantially different results. Activation codes corresponding to FD were observed in RSC but also in several ventral occipital and medial temporal lobe brain regions, and RSC and SPL did not emerge as clear hotspots. Some of the difference in anatomical distribution may be due to the fact that the previous study used a restricted set of fMRI slices that did not cover SPL. In addition, some of the FD-coding mechanisms in RSC and SPL might have been less engaged in the previous study because navigation was performed within a single chamber; thus, participants were not required to remain oriented within an extended space that included several distinct subspaces as they were here (Epstein et al., 2007). Indeed, because of the spatial restriction of the environment, the previous study could not test whether the heading code was stable across perceptual changes and locations. By showing that FD codes in RSC and SPL generalize across perceptually distinct versions of the same environment, locations within the environment, and task phases, we provide stronger evidence that the observed responses relate to FD per se.

The use of an encoding model approach also allowed us to find important new information about how FD is represented in these regions. Specifically, when we examined the distribution of model weights across voxels, we found that the peak voxelwise responses were equally distributed across FDs. This is what one would expect from a “neural compass” in which the full gamut of possible FDs is represented. In contrast, the minimum voxelwise responses were not equally distributed but were more commonly found for the “North” and “South” directions corresponding to the principal axis of the city. This finding is unlikely to relate to local geometry or boundaries, as these have a fourfold symmetry related to the 90° offsets of the corridors along the four cardinal directions. Rather, this result suggests that FD is coded relative to the global layout of the city, which has a strong North–South organization along the main corridor (an organization that may have been reinforced by the fact that the initial FD in each scan session was South down the main corridor). Notably, a recent study using a trial-based design found evidence that FD codes in RSC and SPL might be defined by angular displacement from a reference direction (Dutriaux et al., 2024), and we previously found evidence for preferential RSC coding of compass north when participants recalled imagined views from a familiar campus (Vass and Epstein, 2017), consistent with previous behavioral results (Frankenstein et al., 2012). Thus, we interpret our data as evidence that RSC and SPL represent FD as deviations from a reference direction, which in this case is defined by the global structure of the environment.
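The logic of this peak/trough analysis can be illustrated with toy data. The sketch below is our construction, not the authors' code: it builds a hypothetical weight matrix in which voxelwise peaks are uniform over FD while troughs lie on the North–South axis, then quantifies each distribution with the mean resultant length, the statistic underlying the Rayleigh uniformity test. Trough angles are doubled before testing because two opposite directions constitute axial data.

```python
import numpy as np

def directional_bump(c, mu, sigma=np.deg2rad(25)):
    """Circular Gaussian bump centered on direction mu."""
    d = np.angle(np.exp(1j * (c - mu)))
    return np.exp(-0.5 * (d / sigma) ** 2)

def resultant_length(angles):
    """Mean resultant length: near 0 for uniformly distributed angles,
    near 1 for concentrated ones (basis of the Rayleigh test)."""
    return np.abs(np.mean(np.exp(1j * angles)))

rng = np.random.default_rng(1)
n_vox, n_filt = 1000, 16
centers = np.linspace(0, 2 * np.pi, n_filt, endpoint=False)

# Toy weight matrix mimicking the reported pattern: each voxel's peak sits
# at a uniformly random FD, while its trough lies on the N-S axis
pref = rng.uniform(0, 2 * np.pi, n_vox)
trough = rng.choice([0.0, np.pi], n_vox)
W = (directional_bump(centers[None, :], pref[:, None])
     - directional_bump(centers[None, :], trough[:, None]))

peak_dirs = centers[np.argmax(W, axis=1)]
min_dirs = centers[np.argmin(W, axis=1)]

# Peaks spread around the circle (low R); troughs cluster on an axis, so
# angles are doubled (0 and 180 deg both map to 0) before testing
R_peaks = resultant_length(peak_dirs)
R_troughs = resultant_length(2 * min_dirs)
```

In this toy case the peak directions yield a small resultant length while the doubled trough directions yield a large one, reproducing in miniature the asymmetry described above.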

Although we observed the strongest FD effects in RSC and SPL, we also observed significant effects in several other ROIs, including ERC, PPA, OPA, and EVC, as well as in other voxels throughout the brain. The finding of FD codes in ERC is consistent with previous literature using trial-based designs; however, it is notable that—though significant—the effects in this region were not particularly strong in the current experiment. This may suggest that FD representations in ERC are more strongly engaged in situations where FD must be newly recovered on each experimental trial, rather than in situations where it must be maintained over a prolonged time period during continuous navigation. Alternatively, the absence of a stronger ERC code may be due to fMRI signal dropout in this region. More broadly, a possible interpretation of our results is that FD codes are widely distributed across the brain, with some regions (RSC and SPL) being the primary but not exclusive nodes for this information. However, we note that this interpretation would be somewhat at odds with the neuropsychology literature, which suggests that deficits of the directional sense are related to lesions in circumscribed regions of the brain, specifically RSC, which is one of the two hotspots revealed in the current study (Takahashi et al., 1997; Aguirre, 1999; Maguire, 2001; Ino et al., 2007; Kawakami et al., 2024).

We also observed some evidence for differences between RSC and SPL spatial codes. Both regions represented FD in a manner that was consistent across spatially distinct sectors of the city (North vs South) and temporally distinct phases of the taxi-cab task (search vs deliver). However, cross-sector prediction accuracy was reduced in RSC compared with within-sector accuracy, whereas no similar reduction was observed in SPL. Conversely, cross-phase prediction accuracy was reduced in SPL compared with within-phase accuracy, whereas there was no similar reduction in RSC. These results indicate that FD codes in RSC are more impacted by changes in location, whereas FD codes in SPL are more impacted by changes in behavioral task.
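This train-on-one-condition, test-on-another logic reduces to a simple cross-validated encoding-model fit. The sketch below is a hypothetical illustration, not the authors' pipeline: the sector labels, filter parameters, and simulated tuning are our assumptions. A voxel whose FD tuning is stable across sectors yields comparable within-sector and cross-sector prediction accuracy.

```python
import numpy as np

def basis(fd, n_filters=16, sigma=np.deg2rad(30)):
    """Circular Gaussian direction filters, as in the encoding model."""
    centers = np.linspace(0, 2 * np.pi, n_filters, endpoint=False)
    d = np.angle(np.exp(1j * (fd[:, None] - centers[None, :])))
    return np.exp(-0.5 * (d / sigma) ** 2)

def ridge(X, y, alpha=10.0):
    """Closed-form ridge regression weights."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

def generalization_r(fd_train, y_train, fd_test, y_test):
    """Fit on one condition; correlate prediction with data from another."""
    w = ridge(basis(fd_train), y_train)
    pred = basis(fd_test) @ w
    return np.corrcoef(pred, y_test)[0, 1]

rng = np.random.default_rng(2)
fd_north = rng.uniform(0, 2 * np.pi, 1500)   # FDs sampled in one sector
fd_south = rng.uniform(0, 2 * np.pi, 1500)   # FDs sampled in the other

tuning = lambda fd: np.cos(fd - np.pi / 3)   # tuning stable across sectors
y_north = tuning(fd_north) + 0.5 * rng.standard_normal(1500)
y_south = tuning(fd_south) + 0.5 * rng.standard_normal(1500)

# Within-sector (split-half of North) vs cross-sector (North -> South)
r_within = generalization_r(fd_north[:750], y_north[:750],
                            fd_north[750:], y_north[750:])
r_cross = generalization_r(fd_north, y_north, fd_south, y_south)
```

A condition-specific code would instead show reduced cross-condition accuracy relative to within-condition accuracy, which is the signature of the RSC (cross-sector) and SPL (cross-phase) reductions reported above.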

In sum, we have shown that RSC and SPL represent FD during naturalistic virtual navigation in a manner that is consistent with a compass-like representation, a cognitive construct that is central to many wayfinding strategies. Beyond these specific findings, the success of the experimental approach indicates that these methods may be widely useful for examining spatial codes during virtual navigation, thus expanding on previous fMRI work that has identified brain systems that are engaged during virtual navigation (Aguirre et al., 1996; Maguire et al., 1998; Spiers and Maguire, 2006) or that show activity scaling with univariate spatial quantities (Spiers and Maguire, 2007; Howard et al., 2014). Although we focus here on FD codes, there is nothing intrinsic to the experimental design that prevents us from looking at other spatial representations. Indeed, a strength of the encoding model approach is that one does not need to set up distinct experimental paradigms targeted toward different spatial codes. Thus, the current approach holds promise for bridging the gap between the granular data obtainable from single-cell recordings in nonhuman animals and the broad systems perspective provided by whole-brain fMRI in humans, as well as potentially revealing navigational mechanisms that may be unique to humans.

Footnotes

  • This work was supported by National Institutes of Health Grants R01EY022350 and R21EY027047 to R.A.E.

  • The authors declare no competing financial interests.

  • This paper contains supplemental material available at: https://doi.org/10.1523/JNEUROSCI.1765-24.2025

  • Correspondence should be addressed to Zhengang Lu at zhengang.lu{at}nyu.edu or Russell A. Epstein at epstein{at}psych.upenn.edu.

SfN exclusive license.

References

  1. Aguirre GK (1999) Topographical disorientation: a synthesis and taxonomy. Brain 122:1613–1628. https://doi.org/10.1093/brain/122.9.1613
  2. Aguirre GK, Detre JA, Alsop DC, D’Esposito M (1996) The parahippocampus subserves topographical learning in man. Cereb Cortex 6:823–829. https://doi.org/10.1093/cercor/6.6.823
  3. Avants BB, Epstein CL, Grossman M, Gee JC (2008) Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal 12:26–41. https://doi.org/10.1016/j.media.2007.06.004
  4. Baumann O, Mattingley JB (2010) Medial parietal cortex encodes perceived heading direction in humans. J Neurosci 30:12897–12901. https://doi.org/10.1523/JNEUROSCI.3077-10.2010
  5. Bicanski A, Burgess N (2020) Neuronal vector coding in spatial cognition. Nat Rev Neurosci 21:453–470. https://doi.org/10.1038/s41583-020-0336-9
  6. Burgess N (2014) The 2014 Nobel Prize in physiology or medicine: a spatial model for cognitive neuroscience. Neuron 84:1120–1125. https://doi.org/10.1016/j.neuron.2014.12.009
  7. Chadwick MJ, Jolly AEJ, Amos DP, Hassabis D, Spiers HJ (2015) A goal direction signal in the human entorhinal/subicular region. Curr Biol 25:87–92. https://doi.org/10.1016/j.cub.2014.11.001
  8. Cox RW, Hyde JS (1997) Software tools for analysis and visualization of fMRI data. NMR Biomed 10:171–178. https://doi.org/10.1002/(SICI)1099-1492(199706/08)10:4/5<171::AID-NBM453>3.0.CO;2-L
  9. Dale AM, Fischl B, Sereno MI (1999) Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage 9:179–194. https://doi.org/10.1006/nimg.1998.0395
  10. Dutriaux L, Xu Y, Sartorato N, Lhuillier S, Bottini R (2024) Disentangling reference frames in the neural compass. Imag Neurosci 2:1–18. https://doi.org/10.1162/imag_a_00149
  11. Epstein RA, Baker CI (2019) Scene perception in the human brain. Annu Rev Vis Sci 5:373–397. https://doi.org/10.1146/annurev-vision-091718-014809
  12. Epstein RA, Parker WE, Feiler AM (2007) Where am I now? Distinct roles for parahippocampal and retrosplenial cortices in place recognition. J Neurosci 27:6141–6149. https://doi.org/10.1523/JNEUROSCI.0799-07.2007
  13. Esteban O, et al. (2019) fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat Methods 16:111–116. https://doi.org/10.1038/s41592-018-0235-4
  14. Fonov V, Evans A, McKinstry R, Almli C, Collins D (2009) Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. Neuroimage 47:S102. https://doi.org/10.1016/S1053-8119(09)70884-5
  15. Frankenstein J, Mohler BJ, Bülthoff HH, Meilinger T (2012) Is the map in our head oriented north? Psychol Sci 23:120–125. https://doi.org/10.1177/0956797611429467
  16. Gallistel CR (1990) The organization of learning. Cambridge, MA: The MIT Press.
  17. Glasser MF, et al. (2013) The minimal preprocessing pipelines for the human connectome project. Neuroimage 80:105–124. https://doi.org/10.1016/j.neuroimage.2013.04.127
  18. Gorgolewski K, Burns CD, Madison C, Clark D, Halchenko YO, Waskom ML, Ghosh SS (2011) Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Front Neuroinform 5:1–15. https://doi.org/10.3389/fninf.2011.00013
  19. Greve DN, Fischl B (2009) Accurate and robust brain image alignment using boundary-based registration. Neuroimage 48:63–72. https://doi.org/10.1016/j.neuroimage.2009.06.060
  20. Grieves RM, Jeffery KJ (2017) The representation of space in the brain. Behav Processes 135:113–131. https://doi.org/10.1016/j.beproc.2016.12.012
  21. Howard LR, Javadi AH, Yu Y, Mill RD, Morrison LC, Knight R, Loftus MM, Staskute L, Spiers HJ (2014) The hippocampus and entorhinal cortex encode the path and Euclidean distances to goals during navigation. Curr Biol 24:1331–1340. https://doi.org/10.1016/j.cub.2014.05.001
  22. Huth AG, de Heer WA, Griffiths TL, Theunissen FE, Gallant JL (2016) Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532:453–458. https://doi.org/10.1038/nature17637
  23. Ino T, Doi T, Hirose S, Kimura T, Ito J, Fukuyama H (2007) Directional disorientation following left retrosplenial hemorrhage: a case report with fMRI studies. Cortex 43:248–254. https://doi.org/10.1016/S0010-9452(08)70479-9
  24. Jenkinson M, Bannister P, Brady M, Smith S (2002) Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17:825–841. https://doi.org/10.1006/nimg.2002.1132
  25. Kawakami N, Okada Y, Morihara K, Katsuse K, Kakinuma K, Matsubara S, Kanno S, Suzuki K (2024) Long-lasting pure topographical disorientation due to heading disorientation following left retrosplenial infarction: a report of two cases. Brain Cogn 181:106211. https://doi.org/10.1016/j.bandc.2024.106211
  26. Kennedy B, Malladi SN, Tootell RB, Nasr S (2024) A previously undescribed scene-selective site is the key to encoding ego-motion in naturalistic environments. Elife 13:RP91601. https://doi.org/10.7554/eLife.91601
  27. Kim M, Maguire EA (2019) Encoding of 3D head direction information in the human brain. Hippocampus 29:619–629. https://doi.org/10.1002/hipo.23060
  28. Kim M, Jeffery KJ, Maguire EA (2017) Multivoxel pattern analysis reveals 3D place information in the human hippocampus. J Neurosci 37:4270–4279. https://doi.org/10.1523/JNEUROSCI.2703-16.2017
  29. Koch C, Li S-C, Polk TA, Schuck NW (2020) Effects of aging on encoding of walking direction in the human brain. Neuropsychologia 141:107379. https://doi.org/10.1016/j.neuropsychologia.2020.107379
  30. Maguire EA (2001) The retrosplenial contribution to human navigation: a review of lesion and neuroimaging findings. Scand J Psychol 42:225–238. https://doi.org/10.1111/1467-9450.00233
  31. Maguire EA, Burgess N, Donnett JG, Frackowiak RSJ, Frith CD, O’Keefe J (1998) Knowing where and getting there: a human navigation network. Science 280:921–924. https://doi.org/10.1126/science.280.5365.921
  32. Marchette SA, Vass LK, Ryan J, Epstein RA (2014) Anchoring the neural compass: coding of local spatial reference frames in human medial parietal lobe. Nat Neurosci 17:1598–1606. https://doi.org/10.1038/nn.3834
  33. McNaughton BL, Battaglia FP, Jensen O, Moser EI, Moser M-B (2006) Path integration and the neural basis of the “cognitive map.” Nat Rev Neurosci 7:663–678. https://doi.org/10.1038/nrn1932
  34. Naselaris T, Kay KN, Nishimoto S, Gallant JL (2011) Encoding and decoding in fMRI. Neuroimage 56:400–410. https://doi.org/10.1016/j.neuroimage.2010.07.073
  35. Nau M, Navarro Schröder T, Frey M, Doeller CF (2020) Behavior-dependent directional tuning in the human visual-navigation network. Nat Commun 11:3247. https://doi.org/10.1038/s41467-020-17000-2
  36. Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL (2011) Reconstructing visual experiences from brain activity evoked by natural movies. Curr Biol 21:1641–1646. https://doi.org/10.1016/j.cub.2011.08.031
  37. Nunez-Elizalde AO, Huth AG, Gallant JL (2019) Voxelwise encoding models with non-spherical multivariate normal priors. Neuroimage 197:482–492. https://doi.org/10.1016/j.neuroimage.2019.04.012
  38. O’Keefe J, Nadel L (1978) The hippocampus as a cognitive map. Oxford: Clarendon Press.
  39. Reuter M, Rosas HD, Fischl B (2010) Highly accurate inverse consistent registration: a robust approach. Neuroimage 53:1181–1196. https://doi.org/10.1016/j.neuroimage.2010.07.020
  40. Sherrill KR, Chrastil ER, Ross RS, Erdem UM, Hasselmo ME, Stern CE (2015) Functional connections between optic flow areas and navigationally responsive brain regions during goal-directed navigation. Neuroimage 118:386–396. https://doi.org/10.1016/j.neuroimage.2015.06.009
  41. Shine JP, Valdés-Herrera JP, Hegarty M, Wolbers T (2016) The human retrosplenial cortex and thalamus code head direction in a global reference frame. J Neurosci 36:6371–6381. https://doi.org/10.1523/JNEUROSCI.1268-15.2016
  42. Spiers HJ, Maguire EA (2006) Thoughts, behaviour, and brain dynamics during navigation in the real world. Neuroimage 31:1826–1840. https://doi.org/10.1016/j.neuroimage.2006.01.037
  43. Spiers HJ, Maguire EA (2007) A navigational guidance system in the human brain. Hippocampus 17:618–626. https://doi.org/10.1002/hipo.20298
  44. Takahashi N, Kawamura M, Shiota J, Kasahata N, Hirayama K (1997) Pure topographic disorientation due to right retrosplenial lesion. Neurology 49:464–469. https://doi.org/10.1212/WNL.49.2.464
  45. Taube JS (2007) The head direction signal: origins and sensory-motor integration. Annu Rev Neurosci 30:181–207. https://doi.org/10.1146/annurev.neuro.29.051605.112854
  46. Tsitsiklis M, et al. (2020) Single-neuron representations of spatial targets in humans. Curr Biol 30:245–253.e4. https://doi.org/10.1016/j.cub.2019.11.048
  47. Tustison NJ, Avants BB, Cook PA, Zheng Y, Egan A, Yushkevich PA, Gee JC (2010) N4ITK: improved N3 bias correction. IEEE Trans Med Imaging 29:1310–1320. https://doi.org/10.1109/TMI.2010.2046908
  48. Vass LK, Epstein RA (2013) Abstract representations of location and facing direction in the human brain. J Neurosci 33:6133–6142. https://doi.org/10.1523/JNEUROSCI.3873-12.2013
  49. Vass LK, Epstein RA (2017) Common neural representations for visually guided reorientation and spatial imagery. Cereb Cortex 27:1457–1471. https://doi.org/10.1093/cercor/bhv343
  50. Zhang Y, Brady M, Smith S (2001) Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imaging 20:45–57. https://doi.org/10.1109/42.906424
A Neural Compass in the Human Brain during Naturalistic Virtual Navigation
Zhengang Lu, Joshua B. Julian, Geoffrey K. Aguirre, Russell A. Epstein
Journal of Neuroscience 20 August 2025, 45 (34) e1765242025; DOI: 10.1523/JNEUROSCI.1765-24.2025

Keywords

  • encoding model
  • fMRI
  • heading direction
  • retrosplenial complex
  • spatial cognition
  • superior parietal lobe

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401