Research Articles, Behavioral/Cognitive

Post-Saccadic Face Processing Is Modulated by Pre-Saccadic Preview: Evidence from Fixation-Related Potentials

Antimo Buonocore, Olaf Dimigen and David Melcher
Journal of Neuroscience 11 March 2020, 40 (11) 2305-2313; DOI: https://doi.org/10.1523/JNEUROSCI.0861-19.2020
Antimo Buonocore
1 Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, 72076 Tübingen, Germany
2 Hertie Institute for Clinical Brain Research, University of Tübingen, 72076 Tübingen, Germany
Olaf Dimigen
3 Department of Psychology, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
David Melcher
4 Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
5 Department of Psychology, University of Massachusetts Boston, Boston, Massachusetts 02125

Abstract

Humans actively sample their environment with saccadic eye movements to bring relevant information into high-acuity foveal vision. Despite being lower in resolution, peripheral information is also available before each saccade. How the pre-saccadic extrafoveal preview of a visual object influences its post-saccadic processing is still an unanswered question. The current study investigated this question by simultaneously recording behavior and fixation-related brain potentials while human subjects made saccades to face stimuli. We manipulated the relationship between pre-saccadic “previews” and post-saccadic images to explicitly isolate the influences of the former. Subjects performed a gender discrimination task on a newly foveated face under three preview conditions: scrambled face, incongruent face (different identity from the foveated face), and congruent face (same identity). As expected, reaction times were faster after a congruent-face preview compared with a scrambled-face preview. Importantly, intact face previews (either incongruent or congruent) resulted in a massive reduction of post-saccadic neural responses. Specifically, we analyzed the classic face-selective N170 component at occipitotemporal electroencephalogram electrodes, which was still present in our experiments with active looking. However, the post-saccadic N170 was strongly attenuated following intact-face previews compared with the scrambled condition. This large and long-lasting decrease in evoked activity is consistent with a trans-saccadic mechanism of prediction that influences category-specific neural processing at the start of a new fixation. These findings constrain theories of visual stability and show that the extrafoveal preview methodology can be a useful tool to investigate its underlying mechanisms.

SIGNIFICANCE STATEMENT Neural correlates of object recognition have traditionally been studied by flashing stimuli to the central visual field. This procedure differs in fundamental ways from natural vision, where viewers actively sample the environment with eye movements and also obtain a low-resolution preview of soon-to-be-fixated objects. Here we show that the N170, a classic electrophysiological marker of the structural encoding of faces, also occurs during a more natural viewing condition but is strongly reduced due to extrafoveal preprocessing (preview benefit). Our results therefore highlight the importance of peripheral vision during trans-saccadic processing in building a coherent and stable representation of the world around us.

  • active prediction
  • eye movements
  • fixation related potentials
  • N170
  • visual stability

Introduction

Visual processing takes place primarily during periods of fixation, which are separated by fast eye movements known as saccades. Unlike in laboratory experiments, in which stimuli appear suddenly, the image present on the fovea during natural viewing is typically the result of a choice to fixate that item based on a peripheral preview of that object. Whether this peripheral preview influences visual processing during the new fixation, and how this might fit into competing theories regarding why visual perception seems stable and continuous across saccades, remains an important question.

One set of theories of visual stability emphasizes the role of a prediction about which visual information will be available after the saccade (for review, see Melcher and Colby, 2008; Melcher, 2011). In reading, a classic behavioral finding is the preview benefit effect (Rayner, 1975): when an upcoming word is visible in extrafoveal vision before a saccade, subsequent fixations on that word are shorter compared with an invalid preview condition. For other complex visual objects, such as faces, there is also evidence that pre-saccadic information can influence post-saccadic percepts (Melcher, 2005; Wolfe and Whitney, 2014) and facilitate the post-saccadic processing of the previewed stimulus (Edwards et al., 2018).

In terms of neural mechanisms, post-saccadic visual processing might be facilitated by neurons that change their tuning toward the “future receptive field” even before the eye movement occurs, a process called predictive remapping (Duhamel et al., 1992; Melcher and Colby, 2008; Melcher, 2011). This prediction signal might be the result of feedback connections between higher-level visual areas anticipating the post-saccadic responses in lower visual areas as well as feedforward connections transmitting information that was not predicted, i.e., the prediction errors (Srinivasan et al., 1982; Rao and Ballard, 1999; Clark, 2013). Together, these neural mechanisms might support trans-saccadic predictions.

There is converging evidence for a reduction in neural responses when a stimulus is predictable compared with when it is unexpected (for review, see de Lange et al., 2018). In fixation-related brain potentials (fERPs), the behavioral preview benefits in reading are associated with a reduction of the evoked, word-specific neural response, an effect termed "preview positivity" (Dimigen et al., 2012; Kornrumpf et al., 2016). Importantly, in reading, preview positivity effects are much stronger when readers execute a saccade toward a word (active condition) compared with control conditions with passive extrafoveal stimulation (Kornrumpf et al., 2016). Along these lines, recent fMRI studies have shown a reduction in BOLD response when the features and location of a stimulus are consistent across a saccade (Dunkley et al., 2016; Zimmermann et al., 2016; Fairhall et al., 2017). These preview effects can be explained equally well by prediction mechanisms or by repetition suppression, i.e., the dampening of a neural signal when a stimulus category is viewed both before (peripherally) and after (foveally) the eye movement. It is still debated whether repetition suppression effects represent a signature of prediction (Rostalski et al., 2019) or not (Tang et al., 2018).

An alternative view of visual stability focuses on the role of the spatial shift of attention toward the peripheral target before saccade execution (Hoffman and Subramaniam, 1995; Deubel and Schneider, 1996; Zhao et al., 2012; Buonocore et al., 2017), with this attentional shift playing a preeminent role (Mathôt and Theeuwes, 2011; Melcher, 2011). The key idea is that selective attention is already present at the beginning of the new fixation, leading to attentional facilitation of post-saccadic processing (for review, see Mathôt and Theeuwes, 2011). In contrast to prediction, which typically results in reduced evoked responses, selective attention tends to amplify neural responses (for review, see Thiele and Bellgrove, 2018). In the case of face stimuli, for example, selective attention enhances evoked responses in the electroencephalogram (EEG; Mohamed et al., 2009; Sreenivasan et al., 2009; Churches et al., 2010).

Testing whether there is a decrease in neural activity (due to prediction) versus an increase (due to attention) has, therefore, been suggested to be an important marker to differentiate between these two mechanisms (Kok et al., 2012; Spaak et al., 2016; de Lange et al., 2018). This raises the question of what happens in the case of preview effects with saccades. If the shift in attention plays a preeminent role in post-saccadic visual processing, then post-saccadic fixation-related ERPs would be expected to be larger in amplitude when a salient preview was available, due to the target receiving attentional enhancement. The aim of the current study was to investigate whether a peripheral preview of a face image would influence post-saccadic processing of that face and, if so, whether it would lead to an increase (attention) or reduction (via prediction or repetition suppression) of the neural response.

Materials and Methods

Participants.

Fifteen participants (10 females, age range: 20–31 years, M = 24.1) who reported no neurological or visual impairments were included in the data analysis. Two additional participants were recorded but had to be excluded based on their behavioral performance (i.e., excessive trial loss >60%; see Behavioral screening and analysis). The experiment was conducted in accordance with the Declaration of Helsinki (2008) and approved by the University of Trento Research Ethics Committee. Participants provided informed written consent and received a compensation of €10 per hour.

Apparatus.

Stimuli were presented on a 24-inch LED monitor (resolution: 1920 × 1080 pixels, subtending 44° × 25.9°) at a vertical refresh of 120 Hz. To reduce head movements, participants were seated with their head stabilized by a chin and forehead rest. The eyes were horizontally and vertically aligned with the center of the screen at a viewing distance of 63 cm. Eye movements were recorded with a video-based eye tracker (EyeLink 1000 with desktop mount; SR Research) at a sampling rate of 1000 Hz (detection algorithm: pupil and corneal reflex; thresholds for saccade detection: 30°/s velocity and 9500°/s2 acceleration). A five-point calibration and validation of the eye tracker on a standard rectangular grid was run at the beginning of the experiment and whenever necessary during the experiment. Programs for stimulus presentation and data collection were written in MATLAB (MathWorks) using the Psychophysics Toolbox v3 (Brainard, 1997; Pelli, 1997) and EyeLink Toolbox extensions (Cornelissen et al., 2002). Participants' manual responses were recorded on a standard keyboard.

The EEG was recorded from 64 Ag/AgCl electrodes (Brain Products) placed at standard locations of the International 10-10 system. Signals were recorded with a time constant of 10 s and a high cutoff of 250 Hz, referenced online against the left mastoid, and digitized at a rate of 1000 Hz. The system was set up with a parallel port splitter so that trigger pulses were sent simultaneously to the EyeLink and EEG acquisition computers.

Procedure.

Participants were seated in a dimly lit room and then briefly familiarized with the task by the experimenter. Figure 1 illustrates the trial scheme. Participants started each trial by pressing the space bar while maintaining their gaze at a central fixation cross (0.5° wide, shown in white on a black background). One second after this button press, two circular placeholders (white rings, diameter 4°, line width 1 pixel) appeared to the left and right of the central fixation cross. Placeholders were centered at eccentricities of ±8° and indicated the positions of the upcoming preview stimuli. Once the eye tracker detected a stable fixation for 1000 ms within an area of 2° around the central fixation cross, the preview display was triggered. Depending on the condition, the preview display consisted either of two different scrambled faces (scrambled-face preview condition) or two different intact faces (intact-face preview condition) that appeared at the previous positions of the placeholders (Fig. 1, Preview). After 500 ms of preview, the fixation cross changed its color and turned either green or red, thereby cueing the participant to execute a saccade toward the left or right stimulus, respectively (Fig. 1, Saccade cue). Participants were instructed to respond as quickly and accurately as possible to the cue with a single saccade. Saccadic reaction times (SRTs) were defined as the interval between cue onset and the onset of the first saccade executed toward the peripheral target.

Figure 1.

Trial scheme. At the beginning of each trial, participants fixated a central fixation cross for 1000 ms. Afterward, two placeholders appeared in the periphery at ±8° to the left and right of fixation (Placeholders). After 1000 ms, two preview stimuli appeared at the positions of the placeholders for 500 ms (Preview). These stimuli could be either scrambled faces (blue outline) or intact faces (dashed green/pink outline). After the preview interval, the central cross turned either green (left) or red (right), thereby cueing the participant to execute a saccade toward the left or right placeholder, respectively (Saccade cue). During the saccade, the preview was first changed into a scrambled image patch for one display cycle (8.3 ms) to introduce a peri-saccadic transient in all three conditions (Transient). Afterward, the stimulus changed to the target face in all conditions (Target). The relationship between the preview stimulus and the target face yielded three conditions for the behavioral and fixation-related EEG analyses: a scrambled preview condition (blue outline), an incongruent preview condition (green outline; different face seen before and after the saccade), and a congruent preview condition (pink outline; same face seen before and after the saccade). Participants were asked to discriminate the gender (male/female) of the face visible after the saccade with a button press. Note that stimuli are not drawn to scale.

During the saccade, once gaze position crossed an invisible vertical boundary placed at a distance of 1° from the fixation cross, a scrambled version of the preview face (that was always different from those shown as previews in the scrambled-face preview condition) was transiently presented for just a single display cycle (8.3 ms; Fig. 1, Transient). The purpose of this gaze-contingent display change was to introduce an intra-saccadic visual transient in all experimental conditions, that is, also in the congruent-face preview condition in which the same face was presented before and after the saccade. After the transient was displayed, and still during the saccade, the preview stimulus always changed into an intact face (Fig. 1, Target).
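
To make the gaze-contingent boundary logic concrete, the following MATLAB sketch illustrates how such a display change can be implemented with Psychophysics Toolbox and EyeLink Toolbox calls. This is a minimal sketch, not the original experimental code: the window pointer, texture handles, fixation position, boundary distance, and saccade direction are placeholder variables assumed to exist.

```matlab
% Minimal sketch of a gaze-contingent boundary trigger (not the original code).
% Assumes an open Psychtoolbox window 'win', textures 'texTransient' and 'texTarget',
% the fixation x-position 'fixX' (pixels), the boundary distance 'boundaryPix'
% (1 deg converted to pixels), the cued direction 'saccadeDir' (+1 right, -1 left),
% and the destination rectangle 'targetRect'.
eyeUsed = Eyelink('EyeAvailable') + 1;          % index into sample arrays (monocular recording assumed)
crossed = false;
while ~crossed
    if Eyelink('NewFloatSampleAvailable') > 0
        smp = Eyelink('NewestFloatSample');     % most recent gaze sample
        gx  = smp.gx(eyeUsed);                  % horizontal gaze position in pixels
        crossed = (gx - fixX) * saccadeDir > boundaryPix;   % boundary crossed?
    end
end
% Intra-saccadic transient: scrambled patch for one display cycle (8.3 ms at 120 Hz)
Screen('DrawTexture', win, texTransient, [], targetRect);
Screen('Flip', win);
% Swap to the post-saccadic target face on the next refresh
Screen('DrawTexture', win, texTarget, [], targetRect);
Screen('Flip', win);
```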

Participants then responded with a button press whether the face that they had landed on with their eyes was male or female. Responses were given with the index fingers of the left and right hand using two keyboard buttons. Manual reaction time (RT) was defined as the interval between the saccade-contingent presentation of the target face (triggered by the saccade toward the face) and the button press. With this methodology, any potential difference in SRTs was excluded from the computation of manual RTs.

The experimental design comprised three main conditions: scrambled-face preview, incongruent-face preview, and congruent-face preview (Fig. 1, Preview). Each condition comprised 160 trials, leading to a total of 480 trials. Conditions differed in terms of the stimulus shown before the saccade (preview stimulus). In the scrambled-face preview condition, the stimuli presented during the preview interval were scrambled faces. In contrast, in both the incongruent- and congruent-face preview conditions, the stimuli shown as previews were intact faces. After the saccade, participants always looked at a face as the target stimulus. This means that in the scrambled-face preview condition, the scrambled face shown as a preview changed into a face.

In the incongruent-face preview condition, the target face shown after the saccade was different from the preview face seen before the saccade (in this condition, the face shown at the irrelevant screen location opposite the cued saccade direction remained the same). This incongruent-face preview condition included both pure changes in facial identity without a change of gender (i.e., male-to-different-male, female-to-different-female) and changes in both identity and gender (male-to-female, female-to-male), in equal proportion. Finally, in the congruent-face preview condition, the target stimulus was identical to the face presented at this position before the saccade. The face seen after the saccade was male or female with equal probability, and the gender of the target face was counterbalanced across preview conditions.

Stimuli.

Forty-two grayscale images were selected from the Nottingham face database (http://pics.stir.ac.uk/zips/nottingham.zip), each showing a frontal view of a face (21 female, 21 male) with a neutral facial expression. To standardize the images and to reduce differences between the genders, a black mask with a circular aperture was applied to each face to cover the external facial features (e.g., hair; Fig. 1). The aperture was centered on the nose, spanned from the forehead to the chin, and subtended a diameter of 4° of visual angle at the viewing distance of 63 cm.

For each original face stimulus, we also generated a scrambled counterpart that was used as the pre-saccadic preview stimulus in the scrambled-face preview condition (see Procedure). For this purpose, we calculated the 2D Fourier transform of each face image and then added a matrix of random phase angles to the existing phase information of the image. We then performed an inverse Fourier transform, thereby preserving the original power spectrum of the image. The same circular aperture as for the intact faces was also applied to the scrambled images.
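
As an illustration, the phase-scrambling procedure described above can be sketched in a few lines of MATLAB. This is a generic reconstruction of the method, not the authors' script; the file name and aperture radius are placeholders.

```matlab
% Minimal sketch of Fourier phase scrambling with a circular aperture (generic).
img = mean(double(imread('face01.png')), 3) / 255;  % hypothetical face image, grayscale in [0, 1]

F     = fft2(img);                                  % 2D Fourier transform
mag   = abs(F);                                     % keep the original power spectrum
phi   = angle(F) + 2*pi*rand(size(F));              % add a matrix of random phase angles
scram = real(ifft2(mag .* exp(1i*phi)));            % inverse transform -> phase-scrambled image
scram = (scram - min(scram(:))) ./ (max(scram(:)) - min(scram(:)));  % rescale for display

% Apply the same circular aperture used for the intact faces
[h, w] = size(scram);
[x, y] = meshgrid(1:w, 1:h);
r      = min(h, w) / 2;                             % aperture radius in pixels (placeholder)
mask   = (x - w/2).^2 + (y - h/2).^2 <= r^2;
scram(~mask) = 0;                                   % black outside the aperture
```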

Finally, for each face image, we selected a second face stimulus that served as the saccade target in the condition with an incongruent preview as well as a third scrambled-face stimulus, which was used as a transient during the saccade. Specifically, to control for low-level differences between the face stimuli shown before and after the saccade, we randomly selected for each image another face stimulus from the pool of 42 face images, such that their difference in average image luminance (estimated via their RGB gray values) was <4% (i.e., difference <11 in 8-bit gray values) and not statistically significant (as confirmed by a one-way ANOVA). In addition, possible differences in image luminance between the stimulus shown before and after the saccade were also controlled by adding luminance as a predictor in the statistical analysis of the EEG (see Single-subject GLM).
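
A simple way to implement the pairing constraint described above is to compute the mean gray value of every image and, for each face, pick a different face whose mean differs by less than the criterion (11 gray values, i.e., <4% of 255). The sketch below is a generic illustration with hypothetical variable names, not the selection script actually used.

```matlab
% Generic sketch of luminance-matched pairing of face images.
% 'images' is assumed to be a 1x42 cell array of grayscale face images (0-255).
meanLum   = cellfun(@(im) mean(im(:)), images);     % mean gray value of each face
nFaces    = numel(images);
partnerId = zeros(1, nFaces);
for i = 1:nFaces
    candidates = setdiff(1:nFaces, i);                            % any other face
    ok = candidates(abs(meanLum(candidates) - meanLum(i)) < 11);  % within 11 gray values
    % for this stimulus set a match is assumed to exist; otherwise relax the criterion
    partnerId(i) = ok(randi(numel(ok)));                          % randomly pick one matched partner
end
```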

Behavioral screening and analysis.

In an initial analysis step, trials were screened for incorrect oculomotor behavior. Specifically, we removed all trials in which no saccade was executed toward either stimulus (0.1% of trials) or an eye blink occurred around the time of saccade execution (−200 to 600 ms around saccade onset; 1.1%). Furthermore, we removed trials in which the eyes deviated from the central fixation cross by >2° during the preview interval (1.9%), the saccadic reaction time was extremely short (<100 ms; 0.8%) or long (>530 ms; 19.6%), saccade amplitude was extremely small (<3°; 1.9%) or large (>10°; 2.6%), or in which the saccade went in the wrong direction (5.5%). Finally, we excluded trials in which the saccade-contingent display change was triggered prematurely by drift movements or microsaccades during the preview interval (0.2%) or in which the main saccade to the target was followed by a secondary saccade >3° within ≤150 ms (0.2%).
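
The exclusion criteria above amount to a set of logical masks over a trial table. A generic MATLAB sketch (with hypothetical column names, not the actual screening script) might look like this:

```matlab
% Generic sketch of the trial screening step. 'T' is assumed to be a table with one
% row per trial and (hypothetical) columns holding the relevant oculomotor measures.
badNoSaccade = ~T.saccadeMade;                             % no saccade to either stimulus
badBlink     = T.blinkNearSaccade;                         % blink from -200 to 600 ms around saccade onset
badFixDrift  = T.maxFixDeviationDeg > 2;                   % left the fixation area during preview
badSRT       = T.srtMs < 100 | T.srtMs > 530;              % extreme saccadic reaction times
badAmp       = T.saccAmpDeg < 3 | T.saccAmpDeg > 10;       % extreme saccade amplitudes
badDir       = T.wrongDirection;                           % saccade toward the uncued side
badTrigger   = T.prematureDisplayChange;                   % boundary crossed by drift or microsaccade
badSecondary = T.secondarySaccadeDeg > 3 & T.secondarySaccadeLatencyMs <= 150;

keep   = ~(badNoSaccade | badBlink | badFixDrift | badSRT | ...
           badAmp | badDir | badTrigger | badSecondary);
Tclean = T(keep, :);                                       % trials entering the analyses
```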

Manual RTs and response accuracies in the gender discrimination task were then submitted to repeated-measures ANOVAs on the three-level factor Preview (the incongruent-face preview condition included both changes in gender, i.e., male-to-female and female-to-male, and changes in identity only, i.e., male-to-different-male or female-to-different-female). For the analysis of the button presses, trials with an extreme manual RT (<200 or >1000 ms) were ignored as outliers. Furthermore, one participant was dropped from the manual RT analysis because of very slow manual RTs and therefore too few remaining trials.
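
As an illustration, the one-way repeated-measures ANOVA on manual RTs could be run in MATLAB with fitrm/ranova (Statistics and Machine Learning Toolbox). The variable names below are hypothetical; this is a sketch of the analysis type, not the authors' analysis code.

```matlab
% Generic sketch of the repeated-measures ANOVA on the three-level factor Preview.
% rtScr, rtInc, rtCon are assumed to be column vectors of per-subject mean RTs (ms).
tbl    = table(rtScr, rtInc, rtCon, ...
               'VariableNames', {'scrambled', 'incongruent', 'congruent'});
within = table(categorical({'scrambled'; 'incongruent'; 'congruent'}), ...
               'VariableNames', {'Preview'});
rm     = fitrm(tbl, 'scrambled-congruent ~ 1', 'WithinDesign', within);
ranova(rm, 'WithinModel', 'Preview')                       % main effect of Preview

% Bonferroni-corrected pairwise follow-up tests between the three preview conditions
multcompare(rm, 'Preview', 'ComparisonType', 'bonferroni')
```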

Electrophysiological data analysis.

For the electrophysiological analysis, the EEG was first synchronized with the eye-tracking channels based on the shared trigger pulses using the EYE-EEG toolbox (Dimigen et al., 2011). The synchronized EEG was then downsampled to 500 Hz, bandpass-filtered from 0.1 to 40 Hz (passband edges) using EEGLAB's (Delorme and Makeig, 2004) finite impulse response filter (pop_eegfiltnew.m) with default settings, and digitally re-referenced to an average reference. In the next step, ocular EEG artifacts were removed using an optimized, eye-tracker-guided variant of Infomax ICA in EEGLAB. To optimize the ICA decomposition and the suppression of the myogenic spike potential peaking at saccade onset (Keren et al., 2010), the ICA was trained on a copy of the data high-pass filtered at 2 Hz (Winkler et al., 2015) in which EEG sampling points occurring around saccade onsets (−20 to +10 ms) were overweighted (Dimigen, 2020). The resulting unmixing weights computed on this high-pass filtered and optimized training data were then applied to the original unfiltered recording, and ocular components were automatically flagged using the eye-tracker-guided procedure by Plöchl et al. (2012) with the saccade–fixation variance ratio threshold set to 1.1.
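
To make this pipeline concrete, here is a minimal EEGLAB-style sketch of the steps listed above. It assumes the EEG has already been synchronized with the eye track via the EYE-EEG toolbox; the file name is hypothetical, and the peri-saccadic overweighting step (Dimigen, 2020) and the component flagging are indicated only as comments because their exact calls depend on the toolbox version.

```matlab
% Minimal sketch of the EEG preprocessing described above (standard EEGLAB calls only).
EEG = pop_loadset('filename', 'sub01_synchronized.set');  % hypothetical, already EYE-EEG synchronized
EEG = pop_resample(EEG, 500);                              % downsample to 500 Hz
EEG = pop_eegfiltnew(EEG, 0.1, 40);                        % 0.1-40 Hz bandpass (passband edges)
EEG = pop_reref(EEG, []);                                  % average reference

% Train ICA on an optimized copy: high-pass at 2 Hz and (conceptually) overweight the
% samples from -20 to +10 ms around saccade onsets (OPTICAT procedure; Dimigen, 2020).
EEGtrain = pop_eegfiltnew(EEG, 2, []);                     % 2 Hz high-pass training copy
% ... overweighting of peri-saccadic samples would be inserted here ...
EEGtrain = pop_runica(EEGtrain, 'icatype', 'runica', 'extended', 1);

% Transfer the unmixing weights back to the analysis data set; ocular components are then
% flagged with the eye-tracker-guided variance-ratio criterion (threshold 1.1) of EYE-EEG.
EEG.icaweights  = EEGtrain.icaweights;
EEG.icasphere   = EEGtrain.icasphere;
EEG.icawinv     = EEGtrain.icawinv;
EEG.icachansind = EEGtrain.icachansind;
```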

Based on the trials with correct oculomotor behavior, we then extracted two sets of 1000-ms-long epochs (−300 to 700 ms) from the artifact-corrected EEG. The first set was cut around the onset of the preview stimuli on the screen [traditional event-related potential (ERP) average]. The second set was cut around the onset of the first fixation on the target face following the saccade (fERP average). To exclude segments with residual non-ocular artifacts, we removed all epochs containing peak-to-peak voltage differences >120 μV in any channel (2.3% of ERP and 2.8% of fERP epochs). Epochs were then baseline-corrected by subtracting the mean channel voltages in the 200 ms interval before stimulus/fixation onset, respectively.

Single-subject GLM (first-level analysis).

Stimulus- and fixation-related potentials were analyzed using a mass univariate model (Smith and Kutas, 2015a) in which a GLM was fitted at each electrode and time point separately using the unfold toolbox (Ehinger and Dimigen, 2019). Analysis of EEG data with mass univariate models has advantages in terms of higher sensitivity (Rousselet et al., 2011; Smith and Kutas, 2015a) and makes it possible to control for the effects of continuous covariates on the waveform. For ERPs, the model contained only the intercept term and one categorical predictor coding whether the preview stimuli consisted of two scrambled (0) or two intact faces (1). For the fERP analysis, the predictors in the regression model were a three-level categorical predictor coding the type of preview shown before the saccade (scrambled, incongruent, congruent) as well as two continuous linear covariates: saccade amplitude and the preview-target luminance difference. Saccade amplitude (in degrees of visual angle) was added to the model because the size of the incoming saccade has a well-established and strong influence on the amplitude of the post-saccadic neural response (Thickbroom et al., 1991; Dandekar et al., 2012). Including saccade amplitude as a nuisance variable in the model therefore controlled for the slight difference in incoming saccade amplitude (∼0.3°; see Results) between preview conditions. In addition, we also found that the fERP was modulated by the difference in mean luminance between the stimulus shown as preview and the post-saccadic target. The mean luminance difference between both stimuli was therefore also included as a continuous covariate.
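
Conceptually, the mass univariate analysis fits the same linear model independently at every channel and time point. The sketch below illustrates this logic in plain MATLAB rather than through the unfold toolbox interface; variable names and the dummy coding are illustrative assumptions.

```matlab
% Generic mass univariate GLM sketch: one OLS fit per channel x time point.
% 'epochs' is assumed to be a channels x time x trials array of fixation-locked EEG,
% 'preview' a trials x 1 vector coding the condition (1 = scrambled, 2 = incongruent,
% 3 = congruent), 'saccAmp' and 'lumDiff' trial-wise continuous covariates.
[nChan, nTime, nTrials] = size(epochs);

X = [ones(nTrials, 1), ...                      % intercept
     preview == 2, preview == 3, ...            % treatment coding, scrambled = reference
     saccAmp - mean(saccAmp), ...               % mean-centered saccade amplitude
     lumDiff - mean(lumDiff)];                  % mean-centered preview-target luminance difference

betas = zeros(nChan, nTime, size(X, 2));
for ch = 1:nChan
    Y = squeeze(epochs(ch, :, :))';             % trials x time
    b = X \ Y;                                  % least-squares fit: predictors x time
    betas(ch, :, :) = b';                       % store as channel x time x predictor
end
```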

As a control analysis, we repeated our analysis of the fERP using a GLM-based linear deconvolution technique (also called continuous-time regression; Dandekar et al., 2012; Smith and Kutas, 2015b; Ehinger and Dimigen, 2019) that is also implemented in the unfold toolbox. In the current experiment, SRTs were ∼30 ms longer for the scrambled-face preview than for the intact-face preview conditions (see Results). This means that the temporal overlap between the ERP evoked by the onset of the saccade cue (red/green fixation cross) and the fERP evoked at saccade offset differed systematically between conditions, potentially biasing the results. GLM-based deconvolution allows us to control this overlapping activity by modeling the response to both types of events (cue and fixation onset) in the same statistical model. However, because the results were virtually identical to those obtained with the simpler mass univariate model, we only report the results of the latter here.

Group statistics (second-level analysis).

Second-level statistical analyses were performed using the threshold-free cluster enhancement method (TFCE; Smith and Nichols, 2009; Mensen and Khatami, 2013), a permutation test (Maris and Oostenveld, 2007) that controls for multiple testing across electrodes and time points without the need to define an arbitrary cluster-forming threshold. Analyses were run using the MATLAB implementation of TFCE (http://github.com/Mensen/ept_TFCE-matlab) based on 2000 random permutations. For ERPs, we compared the response following an intact-face versus a scrambled-face preview. For fERPs, we used the ANOVA variant of the TFCE algorithm, followed up by Bonferroni-corrected pairwise comparisons between the three preview conditions, again using the TFCE method. For visualization of the TFCE results in Figures 2 and 4, p values were thresholded at p < 0.05, p < 0.01, and p < 0.005.
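
The second-level inference rests on permutation testing. The sketch below illustrates the permutation logic for a paired, two-condition comparison using a simple max-statistic correction across channels and time points; it does not implement the TFCE enhancement step itself, for which the referenced ept_TFCE-matlab toolbox was used. Input names are assumptions.

```matlab
% Generic sign-flip permutation test (max-statistic correction, no TFCE enhancement).
% 'diffs' is assumed to be a subjects x channels x time array of per-participant
% condition differences (e.g., intact-face minus scrambled-face preview).
nPerm = 2000;
[nSub, nChan, nTime] = size(diffs);

tObs = squeeze(mean(diffs, 1) ./ (std(diffs, 0, 1) ./ sqrt(nSub)));   % observed t map
tMax = zeros(nPerm, 1);
for p = 1:nPerm
    flips   = sign(rand(nSub, 1) - 0.5);               % random sign flip per subject
    d       = diffs .* flips;                          % implicit expansion over channels/time
    tPerm   = squeeze(mean(d, 1) ./ (std(d, 0, 1) ./ sqrt(nSub)));
    tMax(p) = max(abs(tPerm(:)));                      % max statistic across channels x time
end
pCorr = (sum(tMax >= abs(tObs(:))', 1) + 1) / (nPerm + 1);  % corrected p per channel x time point
pCorr = reshape(pCorr, nChan, nTime);
```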

Results

In the following, we first report the neural response evoked by the onset of the preview stimuli (ERP to the pair of intact vs scrambled faces). This is followed by an analysis of the behavior and fERP to the post-saccadic face stimulus.

Preview stimulus onset: evoked response (ERP)

The goal of this analysis was to ensure that our stimuli were effective in eliciting typical face-related ERP components. Figure 2A shows scalp-topographic difference maps for extrafoveal intact-face previews (i.e., two faces presented bilaterally at ±8° eccentricity) minus scrambled-face previews (two scrambled faces presented at ±8°). Topographies are shown at three latencies after preview stimulus onset, corresponding to the peaks of the P1 (124 ms), N1 (226 ms), and P3 (350 ms) components. White dots in the scalp maps indicate electrodes that showed significant differences between intact- and scrambled-face previews at the given latency (in a pairwise TFCE-based t test). Grand-mean waveforms in Figure 2B show the stimulus-ERP elicited by the onset of the bilateral preview display, averaged across two occipitotemporal electrodes over the left (PO7) and right hemisphere (PO8).

Figure 2.

ERPs aligned to the onset of the preview display. A, Topographic difference maps of intact-face previews minus scrambled-face previews for three latencies after stimulus onset that represent the peak latencies of the P1, N1, and P300, respectively. White dots represent electrodes that show significant differences between the two preview conditions in the TFCE statistic at this latency. B, Grand-mean stimulus-locked ERP, averaged over occipitotemporal electrodes PO7 and PO8 for intact-face previews (green/pink) and scrambled-face previews (blue). C, Results of the TFCE statistic comparing intact-face and scrambled-face previews at all time points and channels. For visualization, p values are thresholded at 0.05, 0.01, and 0.005 with different shades of blue.

At the earlier latencies, during the P1 component, there was not yet a clear difference between the ERP responses to the two types of stimuli (intact vs scrambled faces), apart from a small cluster of activation at right frontocentral sites. However, in the following N1 time window, a strong bilateral negativity emerged at occipitotemporal electrode sites that was slightly larger over the right hemisphere, as is typical for N170 face effects (Eimer, 2011). Over frontocentral sites, the posterior N170 effect was accompanied by a corresponding "vertex-positive potential," a broad positivity generally taken to reflect the positive poles of the bilateral dipole pair generating the occipitotemporal N170 (Eimer, 2011). These results clearly show that the bilateral presentation of the face previews (dashed green/pink line) led to a markedly different evoked response than the scrambled-face images (blue line; Fig. 2B), with faces eliciting a much more pronounced occipitotemporal N170 component (Halgren et al., 2000; Hoshiyama et al., 2003; Deffke et al., 2007; Gao and Wilson, 2013). In contrast, only a smaller frontocentral cluster was observed during the earlier P1 component (Fig. 2A). The N170 peaked at ∼226 ms, ∼50 ms later than typically observed (Bentin et al., 1996). A likely reason for this delay is that the two face stimuli were presented bilaterally in the extrafoveal visual field, rather than in the fovea. The full matrix of TFCE p values depicted in Figure 2C shows that clusters of significant activation arose at ∼160 ms after stimulus onset, over both frontocentral and occipitotemporal areas. Although the difference between the intact-face and scrambled-face preview conditions reached its maximum at 226 ms, the effect remained topographically stable and statistically significant throughout the entire stimulus-locked analysis period (i.e., until 600 ms after stimulus onset).

Preview effects: behavioral results

Figure 3 summarizes behavioral performance in the task. A first finding is that saccadic reaction times were affected by the preview condition: SRTs were ∼30 ms faster in trials with an intact compared with a scrambled-face preview (intact vs scrambled: t(14) = −4.673; p < 0.0004; Fig. 3, left). The same pattern was also reflected in saccade amplitudes, which were slightly larger (∼0.3°) when the preview was an intact rather than a scrambled face (t(14) = 8.259; p < 0.000001; Fig. 3, center). This pattern of results indicates that seeing a possible target stimulus, i.e., a face, in the periphery enhanced the preparation of the oculomotor response toward the target.

Figure 3.

Behavioral results. The average saccadic reaction time (left panel), saccadic amplitude (center panel), and manual RT (right panel) for the scrambled-, incongruent-, and congruent-face preview condition, respectively. *p < 0.05. Error bars denote ±1 SEM.

For the gender discrimination task following the saccade, response accuracy was generally high (89% correct) and did not differ between preview conditions: F(2,26) = 0.475, p = 0.627. However, like SRTs, manual RTs for the button press depended strongly on the preview condition (main effect: F(2,26) = 8.535, p < 0.001), with numerically shorter RTs observed in the two conditions in which a congruent or an incongruent face was shown as a preview compared with the scrambled-face condition (Fig. 3, right). Bonferroni-corrected post hoc tests confirmed that congruent-face previews produced significantly shorter RTs than scrambled previews: t(13) = −3.802; p < 0.007 (Bonferroni). Importantly, this effect replicates the classic trans-saccadic preview benefit also observed with other types of stimuli, in particular words (Rayner, 1975). When the preview was an incongruent face, there was only a statistical trend for faster RTs compared with the scrambled-preview condition: t(13) = −2.546; p < 0.07 (Bonferroni). Manual RTs did not differ significantly between the congruent and incongruent preview conditions. We also tested whether, within the incongruent-preview condition, manual RTs in the gender discrimination task differed according to whether the gender of the faces remained the same across the saccade (i.e., male-to-different-male and female-to-different-female changes) or not (male-to-female and female-to-male changes). Although participants responded numerically faster (by M = 15.2 ms) if the gender remained the same across the saccade, this difference was not significant (t(13) = 1.227, p = 0.2416), indicating that participants did not benefit more from correct-gender previews. For this reason, we did not differentiate between these sub-conditions in the subsequent analyses.

Together, these results replicate a robust trans-saccadic benefit for previewed human faces compared with a non-informative scrambled-preview condition. Both the initial oculomotor response toward the peripheral face as well as the subsequent foveal processing of the facial features (necessary for the gender discrimination task) were significantly enhanced if the extrafoveal preview provided before the saccade was also a human face, supporting the hypothesis of preview facilitation for the processing of face stimuli.

Preview effects: evoked response (fERP)

The main goal of the current study was to compare the fixation-related brain response elicited by the first direct fixation on the target face as a function of the extrafoveal information available during the preceding fixation: a scrambled face, a different person's face, or the same face. Figure 4 summarizes the fERP elicited by the first direct fixation on the target face after the end of the critical saccade. Figure 4A shows the topographic difference maps for the three contrasts at the peaks of the fixation-related P1 (106 ms), N1 (180 ms), and P3 (350 ms) components. Figure 4B shows the corresponding fERP waveforms, averaged again across occipitotemporal electrodes PO7 and PO8. Figure 4 presents the corresponding statistical comparison (TFCE) between the congruent- and the scrambled-face preview conditions (4C) and between the incongruent- and the scrambled-face preview conditions (4D).

Figure 4.

Fixation-related potentials (fERP). A, Topographic difference maps for the difference between the congruent minus incongruent (top row), incongruent minus scrambled (middle row) and congruent minus scrambled preview condition (bottom row) at three latencies after fixation onset on the target face. The latencies correspond to the P1, N1, and P300, respectively. B, Grand-mean fERP averaged across occipitotemporal electrodes PO7 and PO8 for the scrambled (blue), incongruent (green), and congruent face preview condition (pink). Note that the three conditions only differ in terms of the stimulus seen before the saccade, whereas the target face fixated at time 0 of this plot was the same in all three conditions. C, D, TFCE results for the pairwise comparison between the congruent- and scrambled-face preview condition and the incongruent- and scrambled-face preview condition, respectively.

The first interesting observation is that, when contrasting the activity following a congruent-face compared with an incongruent-face preview (Fig. 4A, top row), there was no sign of a significant difference across the entire scalp at any time point. In the second and third rows of Figure 4A, we contrasted the activity following the incongruent- and congruent-face previews, respectively, against the scrambled-face preview. In contrast to the previous comparison, it is evident that seeing an intact face rather than a scrambled-face stimulus in the periphery led to a completely different response pattern at the time of the new fixation, once the target face was foveated. Whereas the fixation-related P1 did not differ between conditions, the following N1 was strongly influenced by the type of preview visible during the preceding fixation. In particular, we report a strong attenuation of the fixation-related N170 in the conditions in which a congruent or incongruent face was visible before the saccade. This effect was more pronounced over the right hemisphere, with a corresponding negative pole over central frontal regions, consistent with the activation pattern observed for the ERP time-locked to stimulus onset (see previous section).

This is especially clear in the three waveforms at electrodes PO7/PO8, where the incongruent-face (green) and congruent-face (pink) previews showed a strong reduction in the post-saccadic response at the time of the N170, i.e., a preview positivity effect compared with the scrambled-face preview (Fig. 4B, blue). Figure 4C visualizes the p-value matrix for the contrast between the congruent-face and scrambled-face preview conditions across the entire epoch. This plot suggests that the preview positivity began at ∼160 ms and persisted until ∼300 ms after fixation onset. It was then followed by a later and weaker cluster of activation between ∼360 and 420 ms, which shared a similar scalp topography with the initial N170 effect. In Figure 4D, we also report the TFCE p-value matrix for the contrast between the incongruent-face and scrambled-face preview conditions. Landing on a face different from the one available during the preview led to a pattern of activation across all electrodes and the entire epoch that was almost identical to that in the congruent preview condition.

In a follow-up analysis, we also tested whether the preview positivity effect in fERPs was modulated by saccadic response time. For this purpose, the mass univariate model was expanded to include an additional predictor dummy-coding whether the SRT in the trial was below (0) or above (1) the participant's median SRT and this predictor was allowed to interact with Preview. The TFCE statistic provided no evidence for a significant interaction (all p values > 0.05).

Discussion

Face and object recognition have traditionally been studied by flashing stimuli to the central visual field during fixation. In contrast, natural vision typically affords an extrafoveal preview of soon-to-be fixated items before they are brought into the fovea by a saccade. Here we show that the extrafoveal preview of a face stimulus leads to a strong reduction in the post-saccadic evoked response compared with a control condition in which a meaningful preview was withheld by scrambling its spectral phase. In particular, the N170 component, which is classically linked to the structural processing of faces, was substantially reduced in trials with a face preview compared with those with a scrambled preview. These results are consistent with a “preview benefit” (i.e., reduction) in the evoked response as previously observed for visual words (Dimigen et al., 2012) and, more generally, with the notion that information about the saccadic target can influence post-saccadic processing (Edwards et al., 2018; Ehinger et al., 2015; for review, see Melcher and Colby, 2008; Melcher and Morrone, 2015).

Given that face processing and visual word recognition both involve highly specialized processing streams, it is perhaps not surprising that the timing and magnitude of our effects differ from the preview positivity effect in reading (Dimigen et al., 2012; Kornrumpf et al., 2016). For faces, the effect peaked earlier than that reported previously for words, was two to three times larger at occipitotemporal channels and also lateralized differently, with a stronger effect over the right (rather than left) hemisphere.

Current theories suggest that face processing involves several stages of neural processing that differ in terms of their feature-selectivity, neural substrate, and associated ERP components. The occipital face area (OFA) has been implicated in processing parts of faces, such as eyes or mouth. The P100 shows similar modulations, suggesting a link with OFA (Pitcher et al., 2007; Sadeh et al., 2010). The fusiform face area (FFA) is associated with configural or holistic face processing and linked to the N170 (Halgren et al., 2000; Hoshiyama et al., 2003; Deffke et al., 2007; Gao and Wilson, 2013). Here we found preview effects only after the end of the P1 component, ∼160 ms after fixation onset, which is consistent with an effect at the level of structural encoding of the face (Sadeh et al., 2010). Likewise, the lack of difference between congruent and incongruent previews is consistent with processing at the level of facial configuration rather than specific local features.

Beyond the N170 effect, a relative positivity for intact-face previews persisted throughout the later parts of the fERP epoch (as well as in the stimulus-onset ERP), which in the context of face stimuli might be associated with processing of dynamic facial expressions in the superior temporal sulcus (Itier and Taylor, 2004; Sadeh et al., 2010; Dalrymple et al., 2011). However, interpreting the pattern of activation in terms of brain regions is complicated. For example, N170 generators potentially involve a larger network than the FFA alone, such as the superior temporal sulcus (Henson et al., 2003). Moreover, preview faces in our study were presented in the periphery as part of a bilateral pair of stimuli, which may have influenced the timing of the evoked responses.

Our experimental design was motivated by the logic of distinguishing between an increase in neural activity, due to attentional enhancement, and a reduction due to prediction, i.e., an expectation driven by eye movement preparation about which information will be present at the future receptive field location (Kok et al., 2012; Spaak et al., 2016; de Lange et al., 2018). Although the exact mechanisms that cause this reduction in neural activity remain a matter of debate, reduced neural response can be considered a hallmark of prediction. In contrast, spatial attention allocated to the saccade target before the movement (Hoffman and Subramaniam, 1995; Deubel and Schneider, 1996; Deubel, 2008; Zhao et al., 2012; van Koningsbruggen and Buonocore, 2013; Buonocore et al., 2017) should be associated with an enhancement of the P1 and N1 components of the fixation-related ERP (Eimer, 2000; Mohamed et al., 2009; Sreenivasan et al., 2009; Churches et al., 2010; Meyberg et al., 2015). Although attention is deployed to the saccade target before the execution of the eye movement, our results clearly show a reduction, rather than enhancement, of the N170 component when a face preview was available.

The timing and topography of the effects observed here argue against a role for surprise, or a change in context, because such effects are more typically reflected in the later centroparietal P3 component (Sutton et al., 1965; Duncan-Johnson and Donchin, 1977; Donchin, 1981). Also, there was a face present after the saccade on every trial, so the face was never a “surprise” in that sense. In contrast, the relatively larger N1 in the scrambled-face preview condition might share some features with the visual mismatch negativity (Stefanics et al., 2014; Kornrumpf et al., 2016). Specifically, one could argue that in the scrambled-face preview condition (i.e., the condition where the prediction is not matched) the fixation of the face might lead to a “mismatch” between prediction and stimulus compared with a reduced or absent mismatch negativity in the conditions with an intact-face preview.

Interestingly, the pattern of effects observed in our experiment also resembles another well known effect reported in the ERP literature, repetition suppression (Kovács et al., 2006; Maurer et al., 2008; Kloth et al., 2010; Amihai et al., 2011). A number of previous studies have found a reduction of the N170 when a face is preceded by another face stimulus in foveal vision (for review, see Schweinberger and Neumann, 2016). It is important to then relate these foveal findings to the results of our trans-saccadic viewing paradigm. One interpretation of our results, following the repetition suppression literature, is that a “face detector” mechanism (Schweinberger and Neumann, 2016) would activate whenever a face appears in the periphery. In both the conditions with a congruent or incongruent preview, the face detector would then be suppressed when another face is foveated post-saccadically, leading to an attenuated N170.

Along these lines, previous studies using fMRI have shown a reduction in BOLD response when a stimulus was stable (i.e., its features and screen position remained constant) across a saccade (Dunkley et al., 2016; Zimmermann et al., 2016; Fairhall et al., 2017). In these studies, following a relatively long adaptation interval (e.g., 2 s), a saccade moved the stimulus from one hemifield (and, therefore, brain hemisphere) to the other. The results have been interpreted as a transfer of spatiotopic adaptation between retinotopic visual areas that is driven by an active mechanism linked to saccade execution (Zimmermann et al., 2016). The lack of a difference after incongruent- and congruent face previews would be consistent with these findings.

Our results add to this literature by demonstrating that the ERP effect can be measured across a saccade and with both faces at different retinotopic locations. They also add to the fMRI findings by showing that the effect develops rapidly, i.e., within 200 ms of the new fixation, after a comparatively brief preview interval. Note, however, that a lack of difference between the incongruent and congruent face preview might also be expected under a prediction mechanism. In our design, participants were likely unable to extract detailed local features from the preview faces appearing at 8° eccentricity. This is also supported by our finding that behavioral performance in the gender classification task did not improve if the gender of the preview face was correct (see Results, Preview effects: behavioral results). If the predictions about the post-saccadic input were only based on coarse configural information, both preview conditions would also not be expected to differ under a prediction view.

In practice, distinguishing whether the reduction in neural signal is due to repetition suppression or prediction might be difficult. There is an open debate whether repetition suppression effects are a signature of a prediction error (Rostalski et al., 2019), or whether they are separable processes (Tang et al., 2018). Some support for trans-saccadic prediction is reported in a recent study showing both a congruency benefit in behavior and reduced evoked responses when the orientation of a face, rather than just the presence of a face, was maintained across the saccade (Huber-Huber et al., 2019). Additional work is then necessary to directly test which of the different effects reported in our current study are driven by prediction and which can be explained by a mere trans-saccadic repetition of a face stimulus.

More generally, we demonstrate that fixation-related ERPs elicited by faces show a generally similar N170 component to that traditionally observed for a sudden stimulus onset (Soto et al., 2018). Given that, in real life, faces are typically brought into the fovea from the periphery rather than appearing out of nowhere, it is important to establish the ecological validity of such category-specific components. Overall, the current study provides a proof of concept for the usefulness of the fERP paradigm for studying visual stability. Follow-up studies could use this technique to investigate the mechanisms underlying trans-saccadic perception and to distinguish between competing theories of how our impression of the visual world remains stable and continuous.

Footnotes

  • This work was supported by a Grant from the National Institute of Mental Health (R21MH117787) to D.M.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Antimo Buonocore at antimo.buonocore@cin.uni-tuebingen.de

References

  1. Amihai I, Deouell LY, Bentin S (2011) Neural adaptation is related to face repetition irrespective of identity: a reappraisal of the N170 effect. Exp Brain Res 209:193–204. doi:10.1007/s00221-011-2546-x pmid:21287156
  2. Bentin S, Allison T, Puce A, Perez E, McCarthy G (1996) Electrophysiological studies of face perception in humans. J Cogn Neurosci 8:551–565. doi:10.1162/jocn.1996.8.6.551 pmid:20740065
  3. Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433–436. doi:10.1163/156856897X00357 pmid:9176952
  4. Buonocore A, Fracasso A, Melcher D (2017) Pre-saccadic perception: separate time courses for enhancement and spatial pooling at the saccade target. PLoS One 12:e0178902–0178923. doi:10.1371/journal.pone.0178902 pmid:28614367
  5. Churches O, Wheelwright S, Baron-Cohen S, Ring H (2010) The N170 is not modulated by attention in autism spectrum conditions. Neuroreport 21:399–403. doi:10.1097/WNR.0b013e328334311b pmid:20848730
  6. Clark A (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav Brain Sci 36:181–204. doi:10.1017/S0140525X12000477 pmid:23663408
  7. Cornelissen FW, Peters EM, Palmer J (2002) The EyeLink toolbox: eye tracking with MATLAB and the psychophysics toolbox. Behav Res Methods Instrum Comput 34:613–617. doi:10.3758/BF03195489 pmid:12564564
  8. Dalrymple KA, Oruç I, Duchaine B, Pancaroglu R, Fox CJ, Iaria G, Handy TC, Barton JJ (2011) The anatomic basis of the right face-selective N170 in acquired prosopagnosia: a combined ERP/fMRI study. Neuropsychologia 49:2553–2563. doi:10.1016/j.neuropsychologia.2011.05.003 pmid:21601585
  9. Dandekar S, Privitera C, Carney T, Klein SA (2012) Neural saccadic response estimation during natural viewing. J Neurophysiol 107:1776–1790. doi:10.1152/jn.00237.2011 pmid:22170971
  10. Deffke I, Sander T, Heidenreich J, Sommer W, Curio G, Trahms L, Lueschow A (2007) MEG/EEG sources of the 170-ms response to faces are co-localized in the fusiform gyrus. Neuroimage 35:1495–1501. doi:10.1016/j.neuroimage.2007.01.034 pmid:17363282
  11. de Lange FP, Heilbron M, Kok P (2018) How do expectations shape perception? Trends Cogn Sci 22:764–779. doi:10.1016/j.tics.2018.06.002 pmid:30122170
  12. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. doi:10.1016/j.jneumeth.2003.10.009 pmid:15102499
  13. Deubel H (2008) The time course of presaccadic attention shifts. Psychol Res 72:630–640. doi:10.1007/s00426-008-0165-3 pmid:18839208
  14. Deubel H, Schneider WX (1996) Saccade target selection and object recognition: evidence for a common attentional mechanism. Vis Res 36:1827–1837. doi:10.1016/0042-6989(95)00294-4 pmid:8759451
  15. Dimigen O (2020) Optimizing the ICA-based removal of ocular EEG artifacts from free viewing experiments. Neuroimage 207:116117. doi:10.1016/j.neuroimage.2019.116117 pmid:31689537
  16. Dimigen O, Sommer W, Hohlfeld A, Jacobs AM, Kliegl R (2011) Coregistration of eye movements and EEG in natural reading: analyses and review. J Exp Psychol Gen 140:552–572. doi:10.1037/a0023885 pmid:21744985
  17. Dimigen O, Kliegl R, Sommer W (2012) Trans-saccadic parafoveal preview benefits in fluent reading: a study with fixation-related brain potentials. Neuroimage 62:381–393. doi:10.1016/j.neuroimage.2012.04.006 pmid:22521255
  18. Donchin E (1981) Surprise!… surprise? Psychophysiology 18:493–513. doi:10.1111/j.1469-8986.1981.tb01815.x pmid:7280146
  19. Duhamel JR, Colby CL, Goldberg ME (1992) The updating of the representation of visual space in parietal cortex by intended eye movements. Science 255:90–92. doi:10.1126/science.1553535 pmid:1553535
  20. Duncan-Johnson CC, Donchin E (1977) On quantifying surprise: the variation of event-related potentials with subjective probability. Psychophysiology 14:456–467. doi:10.1111/j.1469-8986.1977.tb01312.x pmid:905483
  21. Dunkley BT, Baltaretu B, Crawford JD (2016) Trans-saccadic interactions in human parietal and occipital cortex during the retention and comparison of object orientation. Cortex 82:263–276. doi:10.1016/j.cortex.2016.06.012 pmid:27424061
  22. Edwards G, VanRullen R, Cavanagh P (2018) Decoding trans-saccadic memory. J Neurosci 38:1114–1123.
  23. Ehinger BV, Dimigen O (2019) Unfold: an integrated toolbox for overlap correction, non-linear modeling, and regression-based EEG analysis. PeerJ 7:e7838. doi:10.7717/peerj.7838 pmid:31660265
  24. Ehinger BV, König P, Ossandón JP (2015) Predictions of visual content across eye movements and their modulation by inferred information. J Neurosci 35:7403–7413.
  25. Eimer M (2000) Attentional modulations of event-related brain potentials sensitive to faces. Cogn Neuropsychol 17:103–116. doi:10.1080/026432900380517 pmid:20945174
  26. Eimer M (2011) The face-sensitive N170 component of the event-related brain potential. In: The Oxford handbook of face perception, pp 329–344. Oxford: Oxford UP.
  27. Fairhall SL, Schwarzbach J, Lingnau A, Van Koningsbruggen MG, Melcher D (2017) Spatiotopic updating across saccades revealed by spatially-specific fMRI adaptation. Neuroimage 147:339–345. doi:10.1016/j.neuroimage.2016.11.071 pmid:27913216
  28. Gao X, Wilson HR (2013) The neural representation of face space dimensions. Neuropsychologia 51:1787–1793. doi:10.1016/j.neuropsychologia.2013.07.001 pmid:23850598
  29. Halgren E, Raij T, Marinkovic K, Jousmäki V, Hari R (2000) Cognitive response profile of the human fusiform face area as determined by MEG. Cereb Cortex 10:69–81. doi:10.1093/cercor/10.1.69 pmid:10639397
  30. Henson RN, Goshen-Gottstein Y, Ganel T, Otten LJ, Quayle A, Rugg MD (2003) Electrophysiological and haemodynamic correlates of face perception, recognition and priming. Cereb Cortex 13:793–805. doi:10.1093/cercor/13.7.793 pmid:12816895
  31. Hoffman JE, Subramaniam B (1995) The role of visual attention in saccadic eye movements. Percept Psychophys 57:787–795. doi:10.3758/BF03206794 pmid:7651803
  32. Hoshiyama M, Kakigi R, Watanabe S, Miki K, Takeshima Y (2003) Brain responses for the subconscious recognition of faces. Neurosci Res 46:435–442. doi:10.1016/S0168-0102(03)00121-4 pmid:12871765
  33. Huber-Huber C, Buonocore A, Dimigen O, Hickey C, Melcher D (2019) The peripheral preview effect with faces: combined EEG and eye-tracking suggests multiple stages of trans-saccadic predictive and non-predictive processing. Neuroimage 200:344–362. doi:10.1016/j.neuroimage.2019.06.059 pmid:31260837
  34. Itier RJ, Taylor MJ (2004) Source analysis of the N170 to faces and objects. Neuroreport 15:1261–1265. doi:10.1097/01.wnr.0000127827.73576.d8 pmid:15167545
  35. Keren AS, Yuval-Greenberg S, Deouell LY (2010) Saccadic spike potentials in gamma-band EEG: characterization, detection and suppression. Neuroimage 49:2248–2263.
  36. Kloth N, Schweinberger SR, Kovács G (2010) Neural correlates of generic versus gender-specific face adaptation. J Cogn Neurosci 22:2345–2356. doi:10.1162/jocn.2009.21329 pmid:19702459
  37. Kok P, Rahnev D, Jehee JF, Lau HC, de Lange FP (2012) Attention reverses the effect of prediction in silencing sensory signals. Cereb Cortex 22:2197–2206. doi:10.1093/cercor/bhr310 pmid:22047964
  38. Kornrumpf B, Niefind F, Sommer W, Dimigen O (2016) Neural correlates of word recognition: a systematic comparison of natural reading and rapid serial visual presentation. J Cogn Neurosci 28:1374–1391. doi:10.1162/jocn_a_00977 pmid:27167402
  39. Kovács G, Zimmer M, Bankó É, Harza I, Antal A, Vidnyánszky Z (2006) Electrophysiological correlates of visual adaptation to faces and body parts in humans. Cereb Cortex 16:742–753. doi:10.1093/cercor/bhj020 pmid:16120795
  40. Maris E, Oostenveld R (2007) Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods 164:177–190. doi:10.1016/j.jneumeth.2007.03.024 pmid:17517438
  41. Mathôt S, Theeuwes J (2011) Visual attention and stability. Philos Trans R Soc Lond B Biol Sci 366:516–527. doi:10.1098/rstb.2010.0187 pmid:21242140
  42. Maurer U, Rossion B, McCandliss BD (2008) Category specificity in early perception: face and word N170 responses differ in both lateralization and habituation properties. Front Hum Neurosci 2:18. doi:10.3389/neuro.09.018.2008 pmid:19129939
  43. Melcher D (2005) Spatiotopic transfer of visual-form adaptation across saccadic eye movements. Curr Biol 15:1745–1748. doi:10.1016/j.cub.2005.08.044 pmid:16213821
    OpenUrlCrossRefPubMed
  44. ↵
    1. Melcher D
    (2011) Visual stability. Philos Trans R Soc Lond B Biol Sci 366:468–475. doi:10.1098/rstb.2010.0277 pmid:21242136
    OpenUrlCrossRefPubMed
  45. ↵
    1. Melcher D,
    2. Colby CL
    (2008) Trans-saccadic perception. Trends Cogn Sci 12:466–473. doi:10.1016/j.tics.2008.09.003 pmid:18951831
    OpenUrlCrossRefPubMed
  46. ↵
    1. Melcher D,
    2. Morrone MC
    (2015) Nonretinotopic visual processing in the brain. Vis Neurosci 32:E017. doi:10.1017/S095252381500019X pmid:26423219
    OpenUrlCrossRefPubMed
  47. ↵
    1. Mensen A,
    2. Khatami R
    (2013) Advanced EEG analysis using threshold-free cluster-enhancement and non-parametric statistics. Neuroimage 67:111–118. doi:10.1016/j.neuroimage.2012.10.027 pmid:23123297
    OpenUrlCrossRefPubMed
  48. ↵
    1. Meyberg S,
    2. Werkle-Bergner M,
    3. Sommer W,
    4. Dimigen O
    (2015) Microsaccade-related brain potentials signal the focus of visuospatial attention. Neuroimage 104:79–88. doi:10.1016/j.neuroimage.2014.09.065 pmid:25285375
    OpenUrlCrossRefPubMed
  49. ↵
    1. Mohamed TN,
    2. Neumann MF,
    3. Schweinberger SR
    (2009) Perceptual load manipulation reveals sensitivity of the face-selective N170 to attention. Neuroreport 20:782–787. doi:10.1097/WNR.0b013e32832b7e24 pmid:19369907
    OpenUrlCrossRefPubMed
  50. ↵
    1. Pelli DG
    (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442. doi:10.1163/156856897X00366 pmid:9176953
    OpenUrlCrossRefPubMed
  51. ↵
    1. Pitcher D,
    2. Walsh V,
    3. Yovel G,
    4. Duchaine B
    (2007) TMS evidence for the involvement of the right occipital face area in early face processing. Curr Biol 17:1568–1573. doi:10.1016/j.cub.2007.07.063 pmid:17764942
    OpenUrlCrossRefPubMed
  52. ↵
    1. Plöchl M,
    2. Ossandón JP,
    3. König P
    (2012) Combining EEG and eye tracking: identification, characterization, and correction of eye movement artifacts in electroencephalographic data. Front Hum Neurosci 6:278. doi:10.3389/fnhum.2012.00278 pmid:23087632
    OpenUrlCrossRefPubMed
  53. ↵
    1. Rao RP,
    2. Ballard DH
    (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci 2:79–87. doi:10.1038/4580 pmid:10195184
    OpenUrlCrossRefPubMed
  54. ↵
    1. Rayner K
    (1975) The perceptual span and peripheral cues in reading. Cogn Psychol 7:65–81. doi:10.1016/0010-0285(75)90005-5
    OpenUrlCrossRef
  55. ↵
    1. Rostalski SM,
    2. Amado C,
    3. Kovács G
    (2019) Repetition suppression for noisy and intact faces in the occipito-temporal cortex. Front Psychol 10:1348. doi:10.3389/fpsyg.2019.01348 pmid:31258501
    OpenUrlCrossRefPubMed
  56. ↵
    1. Rousselet GA,
    2. Gaspar CM,
    3. Wieczorek KP,
    4. Pernet CR
    (2011) Modeling single-trial ERP reveals modulation of bottom-up face visual processing by top-down task constraints (in some subjects). Front Psychol 2:137. doi:10.3389/fpsyg.2011.00137 pmid:21886627
    OpenUrlCrossRefPubMed
  57. ↵
    1. Sadeh B,
    2. Podlipsky I,
    3. Zhdanov A,
    4. Yovel G
    (2010) Event-related potential and functional MRI measures of face-selectivity are highly correlated: a simultaneous ERP-fMRI investigation. Hum Brain Mapp 31:1490–1501. doi:10.1002/hbm.20952 pmid:20127870
    OpenUrlCrossRefPubMed
  58. ↵
    1. Schweinberger SR,
    2. Neumann MF
    (2016) Repetition effects in human ERPs to faces. Cortex 80:141–153. doi:10.1016/j.cortex.2015.11.001 pmid:26672902
    OpenUrlCrossRefPubMed
  59. ↵
    1. Smith NJ,
    2. Kutas M
    (2015a) Regression-based estimation of ERP waveforms: I. The rERP framework. Psychophysiology 52:157–168. doi:10.1111/psyp.12317 pmid:25141770
    OpenUrlCrossRefPubMed
  60. ↵
    1. Smith NJ,
    2. Kutas M
    (2015b) Regression-based estimation of ERP waveforms: II. Nonlinear effects, overlap correction, and practical considerations. Psychophysiology 52:169–181. doi:10.1111/psyp.12320 pmid:25195691
    OpenUrlCrossRefPubMed
  61. ↵
    1. Smith SM,
    2. Nichols TE
    (2009) Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. Neuroimage 44:83–98. doi:10.1016/j.neuroimage.2008.03.061 pmid:18501637
    OpenUrlCrossRefPubMed
  62. ↵
    1. Soto V,
    2. Tyson-Carr J,
    3. Kokmotou K,
    4. Roberts H,
    5. Cook S,
    6. Fallon N,
    7. Giesbrecht T,
    8. Stancak A
    (2018) Brain responses to emotional faces in natural settings: a wireless mobile EEG recording study. Front Psychol 9:2003. doi:10.3389/fpsyg.2018.02003 pmid:30410458
    OpenUrlCrossRefPubMed
  63. ↵
    1. Spaak E,
    2. Fonken Y,
    3. Jensen O,
    4. de Lange FP
    (2016) The neural mechanisms of prediction in visual search. Cereb Cortex 26:4327–4336. doi:10.1093/cercor/bhv210 pmid:26400919
    OpenUrlCrossRefPubMed
  64. ↵
    1. Sreenivasan KK,
    2. Goldstein JM,
    3. Lustig AG,
    4. Rivas LR,
    5. Jha AP
    (2009) Attention to faces modulates early face processing during low but not high face discriminability. Atten Percept Psychophys 71:837–846. doi:10.3758/APP.71.4.837 pmid:19429962
    OpenUrlCrossRefPubMed
  65. ↵
    1. Srinivasan MV,
    2. Laughlin SB,
    3. Dubs A
    (1982) Predictive coding: a fresh view of inhibition in the retina. Proc R Soc Lond B Biol Sci 216:427–459. doi:10.1098/rspb.1982.0085 pmid:6129637
    OpenUrlCrossRefPubMed
  66. ↵
    1. Stefanics G,
    2. Kremláček J,
    3. Czigler I
    (2014) Visual mismatch negativity: a predictive coding view. Front Hum Neurosci 8:666. doi:10.3389/fnhum.2014.00666 pmid:25278859
    OpenUrlCrossRefPubMed
  67. ↵
    1. Sutton S,
    2. Braren M,
    3. Zubin J,
    4. John ER
    (1965) Evoked-potential correlates of stimulus uncertainty. Science 150:1187–1188. doi:10.1126/science.150.3700.1187 pmid:5852977
    OpenUrlAbstract/FREE Full Text
  68. ↵
    1. Tang MF,
    2. Smout CA,
    3. Arabzadeh E,
    4. Mattingley JB
    (2018) Prediction error and repetition suppression have distinct effects on neural representations of visual information. eLife 7:e33123. doi:10.7554/eLife.33123 pmid:30547881
    OpenUrlCrossRefPubMed
  69. ↵
    1. Thickbroom GW,
    2. Knezevič W,
    3. Carroll WM,
    4. Mastaglia FL
    (1991) Saccade onset and offset lambda waves: relation to pattern movement visually evoked potentials. Brain Res 551:150–156. doi:10.1016/0006-8993(91)90927-N pmid:1913148
    OpenUrlCrossRefPubMed
  70. ↵
    1. Thiele A,
    2. Bellgrove MA
    (2018) Neuromodulation of attention. Neuron 97:769–785. doi:10.1016/j.neuron.2018.01.008 pmid:29470969
    OpenUrlCrossRefPubMed
  71. ↵
    1. van Koningsbruggen MG,
    2. Buonocore A
    (2013) Mechanisms behind perisaccadic increase of perception. J Neurosci 33:11327–11328. doi:10.1523/JNEUROSCI.1567-13.2013 pmid:23843506
    OpenUrlFREE Full Text
  72. ↵
    1. Winkler I,
    2. Debener S,
    3. Müller KR,
    4. Tangermann M
    (2015) On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP. 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp 4101–4105. IEEE.
  73. ↵
    1. Wolfe BA,
    2. Whitney D
    (2014) Facilitating recognition of crowded faces with presaccadic attention. Front Hum Neurosci 8:103. doi:10.3389/fnhum.2014.00103 pmid:24592233
    OpenUrlCrossRefPubMed
  74. ↵
    1. Zhao M,
    2. Gersch TM,
    3. Schnitzer BS,
    4. Dosher BA,
    5. Kowler E
    (2012) Eye movements and attention: the role of pre-saccadic shifts of attention in perception, memory and the control of saccades. Vision Res 74:40–60. doi:10.1016/j.visres.2012.06.017 pmid:22809798
    OpenUrlCrossRefPubMed
  75. ↵
    1. Zimmermann E,
    2. Weidner R,
    3. Abdollahi RO,
    4. Fink GR
    (2016) Spatiotopic adaptation in visual areas. J Neurosci 36:9526–9534. doi:10.1523/JNEUROSCI.0052-16.2016 pmid:27629705
    OpenUrlAbstract/FREE Full Text
Keywords

  • active prediction
  • eye movements
  • fixation-related potentials
  • N170
  • visual stability
