Neural Networks

Volume 14, Issues 6–7, 9 July 2001, Pages 815-824

2001 Special issue
Distributed synchrony in a cell assembly of spiking neurons

https://doi.org/10.1016/S0893-6080(01)00044-2

Abstract

We investigate the formation of a Hebbian cell assembly of spiking neurons, using a temporal synaptic learning curve that is based on recent experimental findings. It includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order. The coupling between the dynamics of synaptic learning and that of neuronal activation leads to interesting results. One possible mode of activity is distributed synchrony, implying spontaneous division of the Hebbian cell assembly into groups, or subassemblies, of cells that fire in a cyclic manner. The behavior of distributed synchrony is investigated both by simulations and by analytic calculations of the resulting synaptic distributions.

Introduction

Consider the process of formation of a Hebbian cell assembly. Conventional wisdom would proceed along the following line of reasoning: start out with a group of neurons that are interconnected, using both excitatory and inhibitory cells. Feed them with a common input that is strong enough to produce action potentials, and let the excitatory synapses grow until a consistent firing pattern can be maintained even if the input is turned off. Using theoretical models of neuronal and synaptic dynamics, we follow this procedure and study the resulting firing modes. Although the model equations may be an oversimplification of true biological dynamics, the emerging firing patterns are intriguing and may connect to existing experimental observations.

Recent studies of firing patterns by Brunel (1999) have shown, in simulations and in mean-field calculations, that large-scale, sparsely connected neuronal networks can fire in different modes. Whereas strong excitatory couplings lead to full synchrony, weaker couplings will usually lead to asynchronous firing of individual neurons that can exhibit either oscillatory or non-oscillatory collective behavior. For fully connected networks, there is evidence for the formation of clusters, where the neurons within each cluster fire synchronously. This phenomenon was analyzed by Golomb, Hansel, Shraiman and Sompolinsky (1992) in a network of phase-coupled oscillators, and was studied in networks of pulse-coupled spiking neurons by van Vreeswijk (1996) and by Hansel, Mato and Meunier (1995).

In contrast to previous studies, the present investigation concentrates on a network storing patterns via Hebbian synapses. We focus mainly on a single Hebbian cell assembly, with full connectivity assumed between all excitatory neurons. We employ synaptic dynamics based on the recent experimental observations of Markram et al. (1997) and Zhang et al. (1998). They have shown that potentiation or depression of synapses connecting excitatory neurons occurs only if both pre- and post-synaptic neurons fire within a critical time window of approximately 20 ms. If the pre-synaptic neuron fires first, potentiation takes place; depression is the rule for the reverse order. The regulatory effects of such a synaptic learning curve on the synapses of a single neuron subjected to external inputs were investigated by Song, Miller and Abbott (2000) and by Kempter, Gerstner and van Hemmen (1999). Here we investigate the effect of such a rule within an assembly of neurons that are all excited by the same external input throughout a training period and are allowed to influence one another through their resulting sustained activity. We find that these synaptic dynamics facilitate the formation of clusters of neurons, thus splitting the Hebbian cell assembly into subassemblies and producing the firing pattern that we call distributed synchrony (DS).
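The asymmetric learning curve described above can be sketched numerically. The exponential shape and the constants below (A_plus, A_minus, the roughly 20 ms time constant) are illustrative assumptions, not the paper's exact kernel:

```python
import numpy as np

def stdp_kernel(dt, A_plus=0.1, A_minus=0.12, tau=20.0):
    """Asymmetric temporal learning curve K(dt), with dt = t_post - t_pre (ms).

    Positive dt (pre-synaptic neuron fires first) gives potentiation;
    negative dt (reverse order) gives depression. Both effects decay
    over a window of roughly tau = 20 ms.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau),
                    -A_minus * np.exp(dt / tau))

print(stdp_kernel(5.0))    # positive: pre-before-post potentiates
print(stdp_kernel(-5.0))   # negative: post-before-pre depresses
```

Spike pairs separated by much more than the ~20 ms window contribute negligibly, which is what confines plasticity to the critical time window.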

In the next section we present the details of our model, which is based on excitatory and inhibitory spiking neurons. The synapses among excitatory neurons undergo learning dynamics that follow an asymmetric temporal rule of the kind observed by Markram et al. (1997) and Zhang et al. (1998). We study the resulting firing patterns and synaptic weights in Sections 3 (Dynamical attractors in Hebbian assemblies) and 4 (Stability of a cycle), where the phenomenon of distributed synchrony is displayed and discussed. To understand it better, we perform in Sections 5 (The two-neurons synaptic matrix) and 6 (Analysis of a cycle) a theoretical analysis of the influence of an ordered firing pattern on the development of the synaptic couplings. This analysis is derivable in a two-neuron model and is compared with the results of simulations on a network of neurons. In Section 7 we demonstrate that similar types of dynamics may also appear in the presence of multiple memory states. A first version of our model was presented in Horn, Levy, Meilijson and Ruppin (2000).

Section snippets

The model

We study a network composed of NE excitatory and NI inhibitory integrate-and-fire neurons. Each neuron in the network is described by its subthreshold membrane potential V_i(t) obeying

    V̇_i(t) = −(1/τ_n) V_i(t) + R I_i(t),

where τ_n is the neuronal membrane decay time constant. A spike is generated when V_i(t) reaches the threshold θ, upon which a refractory period of τ_R sets in and the membrane potential is reset to V_reset, where 0 < V_reset < θ. For simplicity we set the level of the rest potential to 0. I_i(t) is
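These membrane dynamics can be sketched with a simple Euler integration of a single neuron. The parameter values and the constant input below are illustrative placeholders, not the settings used in the paper:

```python
def simulate_lif(I, dt=0.1, tau_n=10.0, R=1.0, theta=1.0,
                 v_reset=0.2, tau_ref=2.0):
    """Euler integration of leaky integrate-and-fire dynamics:
        dV/dt = -V/tau_n + R*I(t).
    A spike is recorded when V reaches theta; V is then reset to
    v_reset and held for a refractory period tau_ref (times in ms).
    """
    V, refrac, spikes = 0.0, 0.0, []
    for k, Ik in enumerate(I):
        if refrac > 0:
            refrac -= dt          # still refractory: skip integration
            continue
        V += dt * (-V / tau_n + R * Ik)
        if V >= theta:
            spikes.append(k * dt)  # spike time in ms
            V = v_reset
            refrac = tau_ref
    return spikes

# Constant suprathreshold drive (steady state tau_n*R*I = 1.5 > theta)
# produces regular periodic firing.
spike_times = simulate_lif([0.15] * 2000)  # 200 ms of input
```

With constant input the inter-spike interval is fixed, which is the single-neuron backbone of the periodic network states studied below.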

Dynamical attractors in Hebbian assemblies

We start by studying the behavior of the network described in the previous section using numerical simulations. We look at the types of dynamical attractors the excitatory network flows into, starting from random firing induced by stochastic inputs. We find that in addition to synchronous and asynchronous dynamical attractors, a mode of distributed synchrony (DS) emerges. In this state, the network breaks into n groups, or subassemblies, of neurons, each of which fires synchronously.
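An idealized raster of such a distributed-synchrony state can be written down directly; the number of groups, group size and period below are illustrative choices, not values fitted to the simulations:

```python
import numpy as np

def ds_spike_times(n_groups=3, neurons_per_group=4,
                   period=30.0, n_cycles=5):
    """Idealized distributed-synchrony raster: the assembly splits into
    n_groups subassemblies; all neurons in group g fire synchronously
    at phase g*period/n_groups, and the groups cycle with the network
    period (times in ms). Parameter values are illustrative."""
    spikes = {}  # neuron id -> array of spike times
    for g in range(n_groups):
        times = g * period / n_groups + period * np.arange(n_cycles)
        for k in range(neurons_per_group):
            spikes[g * neurons_per_group + k] = times
    return spikes

raster = ds_spike_times()
```

Neurons within a group share identical spike times, while successive groups are shifted by a fixed fraction of the period, i.e. the cyclic firing order that defines DS.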

Fig. 2(a)

Stability of a cycle

A stable DS cycle can be simply understood when a single synaptic delay sets the basic step, or phase difference, of the cycle. When several delay parameters exist, a situation that probably more accurately represents the α-function character of synaptic transmission in cortical networks, distributed synchrony may still be obtained. In this case, however, the cycle may destabilize and regrouping may occur by itself as time goes on, because different synaptic connections that have different

The two-neurons synaptic matrix

The values of synaptic connections between excitatory neurons are governed by the kernel function K(t_j^l − t_i^k) and by the temporal firing patterns of the two neurons. In this section, the synaptic matrix of a two-neuron system is analyzed in terms of these variables. We look at neurons i and j and at the synaptic connections wij and wji between them. The stationary joint density function f(wij,wji) of the two synaptic connections is calculated. This function is the probability of finding synaptic
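Under the simplifying assumption of strictly periodic firing with a fixed phase lag, the opposite drifts of the two synapses can be sketched by summing a kernel K(t_post − t_pre) over all spike pairs. The exponential kernel shape and all constants below are illustrative assumptions:

```python
import numpy as np

def pair_drift(phase, period=30.0, n_cycles=40,
               A_plus=0.1, A_minus=0.12, tau=20.0):
    """Net per-cycle drift of the two synapses between neurons i and j
    that fire periodically, with j lagging i by `phase` ms.

    Each drift sums K(t_post - t_pre) over all spike pairs, where K is
    an assumed exponential potentiation/depression kernel.
    Returns (dw for synapse j->i, dw for synapse i->j).
    """
    def K(dt):
        return np.where(dt >= 0, A_plus * np.exp(-dt / tau),
                        -A_minus * np.exp(dt / tau))
    t_i = np.arange(n_cycles) * period
    t_j = t_i + phase
    diff = t_i[:, None] - t_j[None, :]     # post(i) minus pre(j)
    dw_j_to_i = K(diff).sum() / n_cycles   # synapse from j onto i
    dw_i_to_j = K(-diff).sum() / n_cycles  # synapse from i onto j
    return dw_j_to_i, dw_i_to_j

d_j_to_i, d_i_to_j = pair_drift(phase=5.0)
# The synapse from the leading to the lagging neuron potentiates,
# while the reverse synapse depresses: a fixed phase shift engraves
# an asymmetric pair (wij, wji).
```

This is the mechanism by which an ordered firing pattern shapes the asymmetric, partly antisymmetric, synaptic matrix analyzed in this section.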

Analysis of a cycle

As shown in the previous section, the phase shift between the firing times of two neurons characterizes their synaptic connections. These phase shifts are determined by the firing pattern of the network. By evaluating all of them, the synaptic distribution function for a network of NE neurons can be constructed. Assessing all the phase shifts for an arbitrary firing state may be difficult, but for the case of distributed synchrony, when these phase shifts take several distinct values, the

Overlapping cell assemblies

So far, we have followed the procedure, stated at the beginning of the Introduction, of formation of a Hebbian cell-assembly. We noted that it can break into several subassemblies forming a cycle of DS. If such a cell-assembly should represent some memory in an associative memory model, we have to consider the problem of encoding of multiple memories. As a first step toward answering this question, we will show in this section that overlapping DS synaptic matrices can be employed in a retrieval

Discussion

The asymmetric temporal nature of synaptic learning curves among excitatory neurons, as observed by Markram et al. (1997) and Zhang et al. (1998), naturally leads to asymmetric and, to some extent, antisymmetric synaptic matrices. This is manifested in our various simulations, starting with Fig. 3, and in our analytic results. The main point we make in this article is that this asymmetry helps to engrave and stabilize a cyclic firing pattern that we call distributed synchrony.

The system that we

References (19)

  • L.F. Abbott et al., Synaptic plasticity: taming the beast, Nature Neuroscience Supplement (2000)
  • L.F. Abbott et al., Synaptic depression and cortical gain control, Science (1997)
  • M. Abeles, Local cortical circuits (1982)
  • E. Bienenstock, A model of neocortex, Network: Computation in Neural Systems (1995)
  • N. Brunel, Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons, Journal of... (1999)
  • C.W. Gardiner, Handbook of stochastic methods (1985)
  • D. Golomb et al., Clustering in globally coupled phase oscillators, Physical Review A (1992)
  • D. Hansel et al., Synchrony in excitatory neural networks, Neural Computation (1995)
  • M. Herrmann et al., Analysis of synfire chains, Network: Computation in Neural Systems (1995)
There are more references available in the full text version of this article.
