Journal of Neuroscience
Research Articles, Behavioral/Cognitive

Temporal Dynamics of Competition between Statistical Learning and Episodic Memory in Intracranial Recordings of Human Visual Cortex

Brynn E. Sherman, Kathryn N. Graves, David M. Huberdeau, Imran H. Quraishi, Eyiyemisi C. Damisah and Nicholas B. Turk-Browne
Journal of Neuroscience 30 November 2022, 42 (48) 9053-9068; DOI: https://doi.org/10.1523/JNEUROSCI.0708-22.2022
Author affiliations:
1 Department of Psychology, Yale University, 2 Hillhouse Avenue, New Haven, CT 06520 (B.E.S., K.N.G., D.M.H., N.B.T.-B.)
2 Department of Neurology, Yale University, 800 Howard Avenue, New Haven, CT 06519 (I.H.Q.)
3 Department of Neurosurgery, Yale University, 333 Cedar Street, New Haven, CT 06510 (E.C.D.)
4 Wu Tsai Institute, Yale University, 100 College Street, New Haven, CT 06510 (N.B.T.-B.)

Figures
Figure 1.

    Task design. A, Example scene category pairings for 1 participant. Three of 12 categories were assigned to Condition A. Each A category was reliably followed by one of three other categories assigned to Condition B to create pairs. The remaining six categories assigned to Condition X were not paired. Participants viewed the A and B (Structured) and X (Random) categories in separate blocks of the task. B, Example stimuli from the Structured block. Participants passively viewed a continuous stream of scenes. Each scene was shown for 267 ms, followed by an ISI of 267 ms with only a fixation cross on the screen. The stream was segmented into subblocks. The same exemplar of each category was presented 4 times per subblock, and new exemplars were introduced for the next subblock. For the Structured block, the category pairs remained consistent across subblocks. Colored frame represents category pairs, corresponding to the A-B pairs (and colored arrows) in subpanel A.
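The Structured-stream construction described above (each A category reliably followed by its B partner, each pair shown 4 times per subblock) can be sketched as follows. This is a minimal illustration, not the authors' stimulus code: the category names are hypothetical, and the within-subblock shuffling of pairs is an assumption.

```python
import random

def structured_subblock(pairs, repeats=4, seed=0):
    """Build one subblock of a Structured stream.

    pairs: list of (A_category, B_category) tuples, where each A is always
    followed by its fixed B partner. Each pair appears `repeats` times, in a
    shuffled order (shuffling is an assumed detail). Returns a flat list of
    category labels.
    """
    rng = random.Random(seed)
    order = [p for p in pairs for _ in range(repeats)]
    rng.shuffle(order)
    return [cat for a, b in order for cat in (a, b)]

# Toy usage with hypothetical scene-category labels
stream = structured_subblock(
    [("beach", "forest"), ("city", "canyon"), ("desert", "cave")]
)
```

Each A label lands at an even index and is always followed by its B partner, so the A-to-B transition probability is 1.0, matching the design.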

Figure 2.

    Electrode coverage. The contact locations on the grid, strip, and/or depth electrodes for each participant are plotted as circles in standard brain space. Contacts colored in blue were localized to the visual cortex mask.

Figure 3.

    Category evidence analysis pipeline. A, A Morlet wavelet approach was used to extract time-frequency information from contacts in visual cortex. This resulted in contact by frequency vectors for every time point of encoding phase and memory phase trials, which served as the neural patterns for subsequent analysis steps. B, To identify the neural patterns that distinguished between categories, we ran a series of binary classifiers for every pair of categories from the memory phase trials. These classifiers were trained on the contact by frequency vectors for a single time point or set of time points. The classifiers were then tested on time points from held-out data. C, After a series of feature selection steps, we chose the per-participant top-N time point set that produced the best classification accuracy, and then averaged contact by frequency vectors across those time points (across all exemplars of a given category) to create a “template” of neural activity for each category. D, We then correlated the template for each category from the memory phase with the contact by frequency vector at each time point of each trial/exemplar from that category during the (independent) encoding phase, yielding a time course of pattern similarity reflecting neural category evidence.
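The template-and-correlation steps in panels C and D can be sketched as below with toy data. This is a minimal illustration based only on the caption, not the authors' released analysis code: array shapes, variable names, and Pearson correlation at each time point are assumptions.

```python
import numpy as np

def make_template(patterns, top_timepoints):
    """Average contact-by-frequency vectors over the selected time points.

    patterns: (n_trials, n_timepoints, n_features) array of flattened
    contact-by-frequency vectors from the memory phase.
    top_timepoints: indices chosen by the feature-selection step.
    """
    return patterns[:, top_timepoints, :].mean(axis=(0, 1))

def evidence_timecourse(template, trial):
    """Pearson-correlate the category template with each encoding time point.

    trial: (n_timepoints, n_features) array for one encoding trial.
    Returns a 1-D time course of neural category evidence.
    """
    t = template - template.mean()
    x = trial - trial.mean(axis=1, keepdims=True)
    num = x @ t
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(t)
    return num / den

# Toy usage with random data (shapes are illustrative only)
rng = np.random.default_rng(0)
mem = rng.normal(size=(10, 20, 50))        # memory-phase trials
template = make_template(mem, [3, 4, 5])   # assumed top-3 time points
enc_trial = rng.normal(size=(30, 50))      # one encoding-phase trial
r = evidence_timecourse(template, enc_trial)
```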

Figure 4.

    Behavioral results. A, Overall memory performance collapsed across conditions. Circles represent A′ (a sensitivity measure for recognition memory) for each participant. All participants were above chance (0.5). B, Hit rate as a function of condition (A: predictive; B: predictable; X: control). Bars represent group means. Error bars indicate bootstrapped 95% CI across participants. Individual participant data are overlaid as gray circles and lines.
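A′ in panel A is a nonparametric sensitivity measure computed from hit and false-alarm rates, with 0.5 as chance and 1.0 as perfect discrimination. One common formula (often attributed to Pollack and Norman) is sketched below; whether the authors used exactly this variant is an assumption.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric recognition sensitivity; 0.5 = chance, 1.0 = perfect."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form when false alarms exceed hits
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, a hit rate of 0.8 with a false-alarm rate of 0.2 gives A′ = 0.875.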

Figure 5.

    Neural frequency tagging analysis. A, Schematic of analysis and hypothesized neural oscillations. We expect entrainment of visual contacts at the frequency of images in both blocks. In the Structured block, we also expect entrainment at the frequency of category pairs. B, These hypotheses were confirmed, with reliable peaks in coherence at the image and pair frequencies in Structured blocks but only at the image frequency in Random blocks. C, We examined the emergence of entrainment over time by measuring the difference in coherence at the frequency of interest, relative to the two neighboring frequencies, as we iteratively increased the number of subblocks from the start of the run included in the analysis. Left, Coherence at the pair frequency emerged over time in the Structured block (reaching significance by the 13th subblock and beyond) but not in the Random block. Right, Coherence at the image frequency was high in both blocks, regardless of how many subblocks were included. Error bands indicate the 95% bootstrapped CIs across participants.
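Given the task timing (267 ms image + 267 ms ISI), images arrive at roughly 1.87 Hz and pairs at half that. A toy illustration of detecting entrainment at a tagged frequency follows; it uses a simple FFT power contrast against neighboring bins, whereas the paper's coherence measure is more involved, and the sampling rate here is an assumption.

```python
import numpy as np

def peak_vs_neighbors(signal, fs, f_target):
    """Power at the FFT bin nearest f_target minus the mean power of the two
    neighboring bins -- a simple stand-in for the coherence contrast in
    panel C (the paper's actual measure may differ)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    k = int(np.argmin(np.abs(freqs - f_target)))
    return spec[k] - 0.5 * (spec[k - 1] + spec[k + 1])

fs = 100.0                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)      # one minute of toy signal
f_image = 1 / (0.267 + 0.267)     # ~1.87 Hz image frequency from task timing
entrained = np.sin(2 * np.pi * f_image * t)   # idealized entrained response
contrast = peak_vs_neighbors(entrained, fs, f_image)
```

An entrained signal yields a large positive contrast at the tagged frequency; broadband noise does not.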

Figure 6.

    Category decoding and feature selection. A, To establish overall category decoding accuracy, we trained and tested binary category classifiers separately for all individual time points, yielding a temporal generalization matrix. B, As a first feature-selection step, we computed the average classification accuracy (across pairwise classifiers) for each training time point and participant (colored lines). We then ranked the time points by classification accuracy. C, To select the set of time points that produced the best classification for a given participant, we trained and tested the category classifiers on an increasing number of time points, starting with the best-performing time point identified in B and iteratively adding time points by rank. We then computed the per-participant average classification accuracy for each set of time points. D, Histogram depicting which training time points were selected for template creation for all participants (e.g., count = 3 indicates that that time point was included for 3 of the 7 participants).
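The rank-and-grow selection in panels B and C amounts to: score each training time point alone, sort by accuracy, then grow the set one rank at a time and keep the set size that classifies best. A schematic sketch, assuming the scoring callables stand in for the binary category classifiers described in the caption:

```python
import numpy as np

def select_timepoints(acc_per_timepoint, acc_for_set):
    """Pick the best top-N set of training time points.

    acc_per_timepoint: accuracy from training on each time point alone.
    acc_for_set(indices): accuracy from training on that set of time points.
    Returns the highest-scoring set of time-point indices.
    """
    order = np.argsort(acc_per_timepoint)[::-1]   # best single time points first
    best_n, best_acc = 1, -np.inf
    for n in range(1, len(order) + 1):            # grow the set by rank
        acc = acc_for_set(order[:n])
        if acc > best_acc:
            best_n, best_acc = n, acc
    return [int(i) for i in order[:best_n]]

# Toy usage: a fake set-level scorer that peaks at two time points
picked = select_timepoints(
    [0.5, 0.9, 0.7],
    lambda idx: 0.8 if len(idx) == 2 else 0.6,
)
```

Here the two best-ranked time points (indices 1 and 2) are selected, since the fake scorer peaks for sets of size two.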

Figure 7.

    Neural category evidence. A, Time course of similarity between patterns of neural activity in visual contacts evoked by exemplars from A (predictive), B (predictable), and X (control) categories and category template patterns for A, B, and X, respectively, baselined to average evidence for the other categories of the same condition. Inset, Raw pattern similarity before baseline subtraction for the category template of interest (dark) and the average of the other category templates from the same condition (light). Error bands were removed for ease of visualization. Current, the trial when the item was presented; Pre, the trial before the item was presented; Post, the trial after the item was presented. For each row/condition, the Pre, Current, and Post trials are compared with the same category template (Current). Error bands represent the bootstrapped 95% CIs across participants (i.e., any time point whose band excludes 0, p < 0.05). B, Average pattern similarity collapsed across time points within ON (stimulus on screen) and ISI (fixation between stimuli) epochs. Each dot represents an individual participant. Bars represent the means across participants. Error bars indicate the bootstrapped 95% CIs. *p < 0.05. **p < 0.01. ***p < 0.001.
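The bootstrapped 95% CIs across participants, used throughout these figures, can be illustrated with a percentile bootstrap of the mean; the exact resampling scheme and the per-participant values below are assumptions. As the caption notes, a band that excludes 0 corresponds to p < 0.05.

```python
import numpy as np

def bootstrap_ci(values, n_boot=10_000, seed=0):
    """Percentile-bootstrap 95% CI of the mean across participants."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Resample participants with replacement and recompute the mean each time
    idx = rng.integers(0, len(values), size=(n_boot, len(values)))
    means = values[idx].mean(axis=1)
    return float(np.percentile(means, 2.5)), float(np.percentile(means, 97.5))

# Hypothetical per-participant effect values (n = 7, as in the study)
lo, hi = bootstrap_ci([0.12, 0.08, 0.15, 0.05, 0.11, 0.09, 0.13])
significant = not (lo <= 0 <= hi)   # band excludes 0 -> p < 0.05
```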

Figure 8.

    Subsequent memory analysis. A, Left, Time course of pattern similarity in visual contacts between the encoding of A items and the category templates for A (Perceived, A during A) and B (Predicted, B during A), as a function of whether the A items were subsequently remembered or forgotten. Right, Pattern similarity averaged within the ISI period, the epoch in which we observed overall evidence of prediction, as a function of subsequent memory for A items (filled bars represent remembered; empty bars represent forgotten). B, Left, Time course of pattern similarity in visual contacts between the category template for B and the encoding of A items (Predicted, B during A) and B items (Perceived, B during B), as a function of whether the B items were subsequently remembered or forgotten. Right, Pattern similarity averaged within the ISI period, as a function of subsequent memory for B items. Error shading/bars represent the bootstrapped 95% CI across participants. Each dot represents an individual participant. *p < 0.05. **p < 0.01. ***p < 0.001.

Tables

Table 1.

Patient information [a]

ID  Age (yr)  Sex  nElec (vis)  Implant  Data collected  Notes
1   19        F    203 (21)     R G/S/D  2S, 2R          R2 mem data not usable (D)
2   26        F    163 (59)     L G/S/D  2S, 2R          —
3   43        F    172 (10)     Bi D     1S, 2R          —
4   61        F    136 (0)      Bi D     1S, 1R          neural mem data not usable (T)
5   31        M    152 (31)     L G/S/D  2S, 2R          R1 encoding data not usable (T)
6   69        F    92 (7)       L D      2S, 2R          —
7   33        M    232 (22)     Bi D     1S, 1R          —
8   31        F    192 (20)     Bi D     2S, 2R          no mem data collected (C)
9   56        F    192 (36)     Bi D     2S, 2R          R1 encoding data not usable (T)
10  53        M    148 (0)      Bi D     2S, 2R          —

[a] Description of patient participation. nElec (vis): total number of electrode contacts, followed by the number of visual electrode contacts. Implant: R, right-sided implant; L, left-sided implant; Bi, bilateral implant; G, grid; S, strip; D, depth. Data collected: number of runs collected per condition (S, Structured; R, Random). Notes: which runs (if any) were excluded from given analyses and why: D, patient distraction (e.g., a clinician coming in and disrupting testing); T, trigger issue (i.e., an equipment error that prevented aligning individual trials to the neural signal); C, computer error (e.g., the computer crashed).

Keywords

  • iEEG
  • learning
  • memory
  • prediction

Copyright © 2023 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
