Modeling Place Cell Remapping and Grid Realignment
César Rennó-Costa and Adriano B.L. Tort
(see pages 8062–8076)
As a rodent moves through a familiar environment, its position is encoded by neurons in the hippocampus and entorhinal cortex. Hippocampal place cells fire when the animal is at a specific location, while entorhinal grid cells fire at multiple locations evenly spaced throughout the environment. Together, these cells create a cognitive map. Each time an animal returns to a particular location, grid and place cell ensembles representing that location show the same firing pattern, even if small environmental changes have occurred. This allows animals to recognize a location despite minor variations. But moderate environmental changes, particularly in nonspatial features, affect place-cell firing rates (rate remapping), and large environmental changes (e.g., moving to a new arena) cause global remapping of place fields and realignment of grid fields.
Place cells and grid cells are connected in loops and influence each other's activity. To understand how these cells interact in changing environments, Rennó-Costa and Tort created a computational model in which place cells and grid cells received input from sensory cells and were connected to each other via a feedback loop. All connections could undergo Hebbian synaptic plasticity. The model was used to simulate experiments in which animals explored different environments, and it recapitulated many experimental findings. Most notably, activity was unaffected by small changes in sensory input, whereas large changes caused global remapping of place cells and grid realignment. Furthermore, when one environment was gradually morphed into another, the firing patterns of both cell types switched abruptly at an intermediate stage. The model also showed that place cells did not require input from grid cells to undergo global remapping, but grid cell input improved the stability of place cell representations when sensory input was noisy or inconsistent.
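As a rough illustration of this kind of architecture, the sketch below wires a sensory layer to place-cell and grid-cell layers connected in a feedback loop, with Hebbian updates on all connections. The layer sizes, learning rate, k-winners-take-all nonlinearity, and settling loop are illustrative assumptions, not the equations or parameters of Rennó-Costa and Tort's model.

```python
import numpy as np

# Minimal sketch of a sensory -> place <-> grid loop with Hebbian plasticity.
# All sizes and parameters are illustrative assumptions.
rng = np.random.default_rng(0)
n_sensory, n_place, n_grid = 50, 30, 20

W_sp = rng.random((n_place, n_sensory))   # sensory -> place weights
W_pg = rng.random((n_grid, n_place))      # place -> grid weights
W_gp = rng.random((n_place, n_grid))      # grid -> place feedback weights
lr = 0.01

def winners(x, k):
    """Simple k-winners-take-all nonlinearity (illustrative)."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]
    out[idx] = x[idx]
    return out

def step(sensory):
    # Place cells combine sensory drive with grid-cell feedback.
    grid = np.zeros(n_grid)
    for _ in range(5):  # let the place <-> grid loop settle
        place = winners(W_sp @ sensory + W_gp @ grid, k=5)
        grid = winners(W_pg @ place, k=4)
    # Hebbian updates: co-active pre/post pairs are strengthened,
    # and each row of weights is renormalized to stay bounded.
    for W, post, pre in ((W_sp, place, sensory), (W_pg, grid, place), (W_gp, place, grid)):
        W += lr * np.outer(post, pre)
        W /= np.linalg.norm(W, axis=1, keepdims=True)
    return place, grid

place, grid = step(rng.random(n_sensory))
```

In a sketch of this type, global remapping would correspond to a different set of place cells winning the competition once the sensory input changes substantially, while the grid-to-place feedback helps the same ensemble win again when the input changes only slightly.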
Overall, the results support the hypothesis that place cell remapping induces grid cell realignment, rather than vice versa. They are consistent with evidence that place fields in young rodents become more stable after grid cell properties develop. Finally, they demonstrate that synaptic plasticity at non-grid-cell inputs to place cells and between place and grid cells is sufficient to generate appropriate firing patterns during map construction.
Statistical Power in Human Neuroscience Studies
Camilla L. Nord, Vincent Valton, John Wood, and Jonathan P. Roiser
(see pages 8051–8061)
Statistical power is the probability that a study will correctly reject the null hypothesis and thus detect a real effect. It depends primarily on effect size and sample size: larger samples are required to detect smaller effects. Low power is a concern not only because it reduces the ability to detect an effect, but also because, when widespread, it reduces positive predictive value (the probability that statistically significant effects are true). In 2013, Button et al. (Nat Rev Neurosci 14:365) examined the power of all human neuroscience studies that were included in meta-analyses published in 2009 and concluded that the median power of these studies was disturbingly low (∼21%), suggesting that most findings reported in neuroscience are questionable.
Statistical power varies across human neuroscience studies included in meta-analyses. See Nord, Valton, et al. for details.
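For concreteness, the snippet below shows how power and positive predictive value depend on effect size, sample size, and the significance threshold, using statsmodels for a two-sample t-test and the positive-predictive-value formula from Button et al. The effect sizes, group sizes, and prior odds are illustrative numbers, not values taken from the studies discussed here.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative only: power of a two-sample t-test for an assumed effect
# size (Cohen's d) and per-group sample size.
analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power = {power:.2f}")  # ~0.33 for a medium effect with 20 per group

# Per-group sample size needed to reach 80% power for the same effect.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group for 80% power = {n_needed:.0f}")  # ~64

# Positive predictive value (Button et al. 2013): the probability that a
# significant result reflects a true effect, given pre-study odds R of a
# true effect, power (1 - beta), and alpha.
def ppv(power, alpha, prior_odds):
    return power * prior_odds / (power * prior_odds + alpha)

print(f"PPV at 21% power, R = 0.25: {ppv(0.21, 0.05, 0.25):.2f}")  # ~0.51
```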
To discover whether statistical power is indeed low throughout human neuroscience, Nord, Valton, et al. examined the distribution of power in the 730 studies included in Button and colleagues' analysis. Using Gaussian mixture modeling, they found that the studies fell into four subgroups ranging from low to high statistical power. They then asked whether power distributions differed across subfields (gene association studies, psychology, neuroimaging, treatment trials, neurochemistry, and miscellaneous). Although low power was especially prevalent in gene association studies, many studies in each subfield were underpowered. Finally, because power estimates are low regardless of sample size if no effect actually exists, the authors asked whether studies of null effects greatly distorted the power distribution. Although studies included in meta-analyses that found no statistically significant effect often had power <10%, excluding these studies increased the median power of the remaining studies only slightly, to 30%.
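The sketch below illustrates the general approach of fitting Gaussian mixture models to a distribution of power estimates and choosing the number of subgroups by an information criterion (BIC here). The synthetic data and the use of scikit-learn's GaussianMixture are assumptions made for illustration; they are not the authors' data or exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic power estimates drawn from four overlapping clusters
# (values clipped to the valid range 0-1); illustrative only.
rng = np.random.default_rng(1)
power = np.concatenate([
    rng.normal(0.10, 0.03, 300),   # a low-power cluster
    rng.normal(0.35, 0.05, 200),
    rng.normal(0.60, 0.05, 130),
    rng.normal(0.90, 0.04, 100),   # a high-power cluster
]).clip(0.0, 1.0).reshape(-1, 1)

# Fit mixtures with 1-6 components and keep the one with the lowest BIC.
fits = [GaussianMixture(n_components=k, random_state=0).fit(power) for k in range(1, 7)]
best = min(fits, key=lambda m: m.bic(power))
print(f"components chosen by BIC: {best.n_components}")
print("subgroup mean power:", np.sort(best.means_.ravel()).round(2))
```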
Based on these results, the authors conclude that statistical power is indeed low in many neuroscience studies, particularly in gene association studies. It must be emphasized, however, that this conclusion is based on a biased sample. Because power estimation requires knowledge of the true effect size—which is estimated most accurately in meta-analyses—the authors only examined studies included in such analyses. But meta-analyses are typically performed when individual studies have small sample sizes (often unavoidable in patient studies) or have yielded conflicting results: in other words, studies that are already likely to have low statistical power. Moreover, because only studies using human subjects were included, conclusions cannot be drawn about power in the entire field of neuroscience.
Footnotes
This Week in The Journal was written by Teresa Esch, Ph.D.