How Sleep Transforms Our Memories
Elizabeth Siefert, Sindhuja Uppuluri, Jianing Mu, Marlie Tandoc, James Antony, and Anna Schapiro
(see article e0022242024)
Memory consolidation stabilizes and transforms memories for long-term storage. This process is thought to be supported by the reactivation of individual memories during sleep, but memories can be contextually complex, and not all of their components are typically remembered equally well over time. In this issue, Siefert et al. investigated how memory reactivation during sleep shapes different components of a memory. Male and female participants first learned novel object categories in which objects had both distinct and overlapping features. Next, a real-time EEG protocol, known as targeted memory reactivation, was used to cue reactivation of the objects by playing their names during moments of sleep considered ideal for memory reactivation. Reactivation improved memory for objects' distinct features but worsened memory for their overlapping features. Thus, memory reactivation during sleep does not act indiscriminately on all features of object memories; rather, it differentiates and modulates distinct components of them, driving memory transformation. These findings are a breakthrough in our understanding of how sleep shapes and transforms memory.
A macaque. See Kim and Pasupathy for more information about how this animal model was used to investigate visual crowding. Image: Flickr/Alix Lee, in the public domain.
Investigating the Neurophysiology of Visual Crowding
Taekjun Kim and Anitha Pasupathy
(see article e2260232024)
Imagine looking for your friend in a crowd or searching for a pen in a drawer filled to the brim with other utensils and tools. Distracting visual stimuli in crowded environments make it difficult to identify objects of interest. This frequent phenomenon, known as visual crowding, has been well described, but its underlying neural mechanisms remain unclear. Kim and Pasupathy tested the hypothesis that crowding is driven by pooled encoding of features from both target and distracting stimuli, which results in a loss of information about target objects. They recorded from a large number of neurons in visual cortical area V4 of two monkeys and discovered that stimulus features that influence crowding, such as the number, distance, and position of distracting stimuli, significantly altered V4 activity. Notably, enhancing the salience of the target object strengthened the target selectivity of neurons and diminished the distracting effect of crowded environments. While these findings support the idea that aspects of crowded environments impair target object detection, the finding that target salience weakens crowding is not entirely consistent with the pooled encoding hypothesis. This study is important because it not only bridges the gap between psychophysical and neurophysiological crowding effects but also advances our understanding of how multiple simultaneously presented stimuli may be encoded in the cortex.
Footnotes
This Week in The Journal was written by Paige McKeon.