Local circuits in different cortical areas and animal species share ubiquitous features of synaptic connectivity and neural spiking activity. For example, neurons in local cortical circuits are more likely to be bidirectionally connected, and to exhibit particular connectivity patterns among three or more neurons, than would be expected if synapses were made at random (Song et al., 2005). Individual neurons also fire irregularly, with random time intervals between spikes, even though models predict that a neuron integrating many random synaptic inputs should fire regularly (Softky and Koch, 1993). At the same time, different cortical circuits rely on their specific synaptic connectivity to perform specialized functions. What gives rise to the experimentally observed features that are largely conserved across cortical circuits?
The ubiquitous features of synaptic connectivity and neural spiking activity have been explained separately, with assumptions made about the other. When each neuron is modeled as firing irregularly according to a Poisson process, synaptic plasticity rules can shape randomly connected networks to exhibit the experimentally observed over-represented patterns (Kozloski and Cecchi, 2010; Babadi and Abbott, 2013). Conversely, on the activity side, networks with sparse and strong synaptic connectivity can yield irregular spiking through balanced excitatory and inhibitory synaptic inputs at each neuron (van Vreeswijk and Sompolinsky, 1996, 1998; Shadlen and Newsome, 1998). However, it remains unclear whether these features of synaptic connectivity and neural spiking activity can be explained simultaneously, without assumptions about each other.
Zhang et al. (2019) hypothesized that experimentally observed features of both synaptic connectivity and neural spiking activity in local cortical circuits can be attained when the network optimizes its synaptic connectivity for associative memory storage. They modeled the local cortical circuit with the McCulloch and Pitts (1943) neural network model, which included both excitatory and inhibitory neurons. Each neuron was represented by a binary unit that is either spiking (activity = 1) or silent (activity = 0) at each simulation step; a unit spikes only if its summed synaptic input at the previous time step exceeds a predefined spike threshold. To simulate biological noise such as spontaneous neural activity and synaptic failure, the model also incorporated noise in the synaptic inputs to each unit. All units were initially interconnected, and the synaptic connections were adjusted to store a set of predefined associative memories. Each memory was represented by a pair of network states x and x′, each defined by a randomly chosen binary activity pattern of units in the model. To load the memory into the network, synaptic connection weights were adjusted such that when the network is initialized at state x, it produces activity as close to x′ as possible at the next simulation step.
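The one-step dynamics and memory loading described above can be sketched in code. The following is a minimal illustration, not the authors' actual implementation: it omits the input noise and the excitatory/inhibitory sign constraints, and it loads a single memory pair with a simple perceptron-style rule. The network size, learning rate, and threshold are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50      # number of binary units (illustrative size)
f = 0.5     # fraction of active units in each random state
eta = 0.1   # learning rate (hypothetical value)

# One associative memory: initialize at x, target x_prime one step later
x = (rng.random(N) < f).astype(float)
x_prime = (rng.random(N) < f).astype(float)

W = rng.normal(0, 0.1, size=(N, N))  # initial all-to-all weights
np.fill_diagonal(W, 0)               # no self-connections
theta = 0.0                          # spike threshold

def step(W, state, theta):
    """One simulation step: a unit spikes (1) iff its summed input exceeds theta."""
    return (W @ state > theta).astype(float)

# Perceptron-style weight updates until the network maps x -> x_prime
for _ in range(1000):
    y = step(W, x, theta)
    err = x_prime - y            # +1: should have spiked, -1: should be silent
    if not err.any():
        break
    W += eta * np.outer(err, x)  # push each unit's input toward its target side
    np.fill_diagonal(W, 0)

assert np.array_equal(step(W, x, theta), x_prime)
```

Because each row of W acts as an independent perceptron for one unit, loading a single memory converges in a few iterations; the interesting regime studied in the paper arises when many such pairs compete for the same finite set of weights.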
The model was primarily constrained by two parameters: robustness and memory load. Robustness refers to the amplitude of noise permissible in the synaptic inputs such that the network can still successfully store memories. Memory load refers to the number of associative memories that the network optimizes its synaptic connections to store. The model cannot store an unlimited number of memories given its finite number of units (Cover, 1965; Gardner and Derrida, 1988), and the optimized synaptic connectivity depends on the number of memories loaded.
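The capacity limit cited here traces back to Cover's (1965) function-counting theorem: a linear threshold unit with N inputs can realize a fraction of random dichotomies of P points that drops sharply once P exceeds 2N. The sketch below reproduces that classic curve; it is not the paper's capacity calculation, which additionally accounts for robustness and excitatory/inhibitory sign constraints.

```python
from math import comb

def frac_separable(P, N):
    """Fraction of random dichotomies of P points in general position in R^N
    realizable by a linear threshold unit (Cover, 1965):
    C(P, N) = 2 * sum_{k=0}^{N-1} binom(P-1, k), out of 2^P dichotomies."""
    if P <= N:
        return 1.0
    c = 2 * sum(comb(P - 1, k) for k in range(N))
    return c / 2**P

N = 100
for ratio in (1.0, 1.5, 2.0, 2.5, 3.0):
    P = int(ratio * N)
    print(f"P/N = {ratio:.1f}  fraction separable = {frac_separable(P, N):.3f}")
```

At exactly P = 2N the fraction is 1/2, which is why 2N is conventionally called the perceptron's capacity; for large N the transition from 1 to 0 becomes nearly a step function.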
By systematically examining network models with different levels of memory load and robustness, Zhang et al. (2019) found that within a certain range of parameters, there exists a set of networks that exhibit the experimentally observed over-representation of bidirectional pairs and specific three-neuron connectivity patterns, as well as irregular neural spiking activity. Moreover, the parameter regime where features of biological circuits were reproduced overlapped with the regime in which memory load was at maximum capacity for the corresponding robustness level. That is, for each level of robustness, the number of memories stored in networks that reproduced the experimentally observed features matched the maximum number of memories that the network could store and retrieve successfully. This suggests that local circuits in the brain might have evolved to store a near-maximum number of memories.
Although over-representation of bidirectionally connected neuron pairs is found ubiquitously across different cortical areas, the over-representation ratio (the ratio between the number of bidirectional connections observed and the number expected in random networks with equivalent connection rates) varies from 1 in barrel cortex to 2 in visual cortex and 4 in prefrontal cortex (Song et al., 2005; Wang et al., 2006; Lefort et al., 2009). Zhang et al. (2019) explain this wide experimentally observed range in terms of the correlation strength between the network state pairs stored as associative memories.
The network successfully stores an associative memory pair of x and x′ if, when the activity is initialized at state x, it transforms the activity pattern to x′ at the subsequent simulation step. The two network states x and x′ can vary from being identical (correlation = 1) to totally different (correlation = 0). In other words, networks that store associative memory pairs with a correlation of 1 stay in activity state x once initialized at it, making the network activity strongly temporally correlated. As the correlation strength of the associative memory pair decreases, x and x′ become less similar. Networks that store associative memory pairs with a correlation of 0 transform activity x to an unrelated state x′ at the next time step, so the network activity is not temporally correlated.
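One simple way to picture correlated state pairs is to generate x′ by copying each unit's activity from x with some probability c and redrawing it at random otherwise. This illustrative construction (not necessarily the one used by Zhang et al.) yields a pair whose empirical correlation is approximately c, spanning the identical (c = 1) to independent (c = 0) extremes described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_pair(n, f, c, rng):
    """Draw a binary state x and a second state x_prime whose correlation
    with x is controlled by c: c=1 copies x exactly, c=0 draws an
    independent random state with the same activity fraction f."""
    x = (rng.random(n) < f).astype(int)
    keep = rng.random(n) < c                 # with prob. c, copy the unit's activity
    fresh = (rng.random(n) < f).astype(int)  # otherwise redraw it at random
    x_prime = np.where(keep, x, fresh)
    return x, x_prime

n, f = 10000, 0.5
for c in (0.0, 0.5, 1.0):
    x, xp = correlated_pair(n, f, c, rng)
    r = np.corrcoef(x, xp)[0, 1]
    print(f"c = {c:.1f}  empirical correlation = {r:.2f}")
```

Because each unit is copied independently, the covariance between x and x′ is c times the variance of x, so the correlation coefficient tracks c directly.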
Zhang et al. (2019) found that for networks optimized to store associative memory pairs with correlations increasing from 0 to 1, the over-representation ratio of excitatory–excitatory neuron pairs increases monotonically from 1 to 4, recapitulating the experimentally observed range. At the same time, the over-representation ratio of excitatory–inhibitory neuron pairs decreases monotonically, which leads to the prediction that cortical areas with high excitatory–excitatory over-representation ratios will exhibit low excitatory–inhibitory over-representation ratios. Future experiments should test this prediction.
In addition, the finding that networks optimized to store more temporally correlated activity patterns show higher ratios of bidirectional connectivity suggests that associative learning with different temporal correlations can leave a reliable footprint in a network's reciprocal connections. Notably, the ratio of bidirectional pairs in higher cortical areas such as the prefrontal cortex is higher than in sensory areas such as the visual cortex (Wang et al., 2006). This observation is in line with findings that prefrontal cortex shows stronger persistent activity, which may support working memory (Curtis and D'Esposito, 2003), whereas visual cortex shows transient activity in response to rapidly changing external stimuli (Murray et al., 2014). Emerging EM datasets that enable characterization of bidirectional over-representation in different cortical areas would provide further clues about network functions such as associative learning and contribute to the exciting field of relating connectivity motifs to different aspects of computation.
In summary, the paper by Zhang et al. (2019) shows that, if a local cortical circuit is assumed to optimize its synaptic connectivity for storing associative memories, the network exhibits the ubiquitously observed features of both synaptic connectivity and neural spiking activity simultaneously when stored memory is at capacity. Previous work has explained some of these features under the same associative memory assumption. Specifically, when associative memories are loaded to capacity, varying robustness can uncover networks that exhibit experimentally observed features of synaptic connectivity (Chapeton et al., 2012, 2015; Brunel, 2016), and optimizing for robustness can yield the balanced state required for irregular spiking (Rubin et al., 2017). Zhang et al. (2019), however, make no initial assumptions about the memory load or robustness of the network and discover that networks exhibiting experimentally observed features lie in the regime of maximum memory load and large robustness. Thus, Zhang et al. (2019) show that the ubiquitous cortical features of synaptic connectivity and neural spiking activity can arise naturally as a result of optimizing network connectivity for associative memory storage.
The cortex is not a static network with fully optimized synaptic connectivity, but one in a constantly changing state across multiple time scales. At short time scales, synapses between neurons change in an activity-dependent manner, leading to synaptic facilitation and depression. If the network is optimized for storage at full capacity, as suggested by Zhang et al. (2019), then the network must forget some memories to store new ones and stay updated with the external environment. How synaptic connectivity changes as the network turns over its stored memories could be pursued in future work by adding synaptic mechanisms for forgetting and activity-independent fluctuations in synaptic strength. At the developmental time scale, the synaptic connectivity pattern is not only affected by experience-dependent plasticity but also shaped by mechanisms such as genetic guiding events (Williams et al., 2010) and resource conservation constraints (Ramon y Cajal, 1899). In biological systems, synaptic connectivity is most likely preconfigured genetically and continuously tuned by a combination of activity-dependent plasticity rules and activity-independent cellular mechanisms. Which observed features are constrained by which mechanisms is an exciting topic for future research.
Footnotes
- Received October 27, 2019.
- Revision received February 3, 2020.
- Accepted February 8, 2020.
Editor's Note: These short reviews of recent JNeurosci articles, written exclusively by students or postdoctoral fellows, summarize the important findings of the paper and provide additional insight and commentary. If the authors of the highlighted article have written a response to the Journal Club, the response can be found by viewing the Journal Club at www.jneurosci.org. For more information on the format, review process, and purpose of Journal Club articles, please see https://www.jneurosci.org/content/jneurosci-journal-club.
I thank Dr. Rich Pang, Dr. Ramakrishnan Iyer, Dr. Brian Hu, Tamina Keira Ramirez, Dr. Stefano Recanatesi, Dr. Eric Shea-Brown, and Dr. Stefan Mihalas for comments on the paper.
The authors declare no competing financial interests.
- Correspondence should be addressed to Jiaqi Shang at jshang6{at}uw.edu
- Copyright © 2020 the authors