Neural Networks

Volume 19, Issues 6–7, July–August 2006, Pages 734–743

2006 Special Issue
Homeostatic synaptic scaling in self-organizing maps

https://doi.org/10.1016/j.neunet.2006.05.006

Abstract

Various forms of the self-organizing map (SOM) have been proposed as models of cortical development [Choe Y., Miikkulainen R., (2004). Contour integration and segmentation with self-organized lateral connections. Biological Cybernetics, 90, 75–88; Kohonen T., (2001). Self-organizing maps (3rd ed.). Springer; Sirosh J., Miikkulainen R., (1997). Topographic receptive fields and patterned lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation, 9(3), 577–594]. Typically, these models use weight normalization to contain the weight growth associated with Hebbian learning. A more plausible mechanism for controlling the Hebbian process has recently emerged. Turrigiano and Nelson [Turrigiano G.G., Nelson S.B., (2004). Homeostatic plasticity in the developing nervous system. Nature Reviews Neuroscience, 5, 97–107] have shown that neurons in the cortex actively maintain an average firing rate by scaling their incoming weights. In this work, it is shown that this type of homeostatic synaptic scaling can replace the common, but unsupported, standard weight normalization. Organized maps still form and the output neurons are able to maintain an unsaturated firing rate, even in the face of large-scale cell proliferation or die-off. In addition, it is shown that in some cases synaptic scaling leads to networks that more accurately reflect the probability distribution of the input data.

Introduction

The self-organizing map (SOM), in its various forms, has been a useful model of cortical development (Choe and Miikkulainen, 2004, Kohonen, 2001, Obermayer et al., 1992, Sirosh and Miikkulainen, 1997). Sirosh and Miikkulainen (1997) showed the simultaneous development of receptive field properties and lateral interactions in a realistic model. Choe and Miikkulainen (2004) demonstrated the usefulness of the developed lateral connections for contour integration and segmentation. It is this lateral connectivity that ensures that neighboring neurons come to respond to similar stimuli and that a well-organized map forms.

In these models, Hebbian learning is used to strengthen associations between stimuli and winning neurons. This type of associative learning has been well documented in the experimental literature (Bi and Poo, 2001, Bliss and Lomo, 1973), but our understanding of it remains incomplete. It is well known that the most straightforward implementations of Hebbian learning lead to unconstrained weight growth. To counteract this problem, typical SOM algorithms use weight normalization: after each learning iteration, all the weights converging onto a neuron are divided by the sum of the incoming weights (or by the square root of the sum of the squared weights). It has been argued that this type of weight normalization is biologically plausible. For example, a neuron might have a finite resource necessary for maintaining incoming synapses, which would place an upper limit on the total summed strength of those synapses. While this sounds within the realm of biological possibility, and is obviously helpful in keeping Hebbian learning in check, little evidence from the experimental literature is available to support it.
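For concreteness, here is a minimal sketch of the two divisive normalization schemes just described, written in Python/NumPy. The array shapes, the function name, and the small constant guarding against division by zero are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def normalize_incoming(W, mode="l1"):
    """Divisively normalize the weights converging onto each neuron.

    W is assumed to have shape (n_outputs, n_inputs): row i holds the
    incoming weights of output neuron i.
    """
    if mode == "l1":
        # Divide by the sum of the incoming weights.
        norms = W.sum(axis=1, keepdims=True)
    else:
        # Divide by the square root of the sum of the squared weights.
        norms = np.sqrt((W ** 2).sum(axis=1, keepdims=True))
    return W / np.maximum(norms, 1e-12)  # guard against division by zero
```

In a typical SOM training loop, such a rule would be applied as W = normalize_incoming(W) after every Hebbian update, keeping the total incoming weight of each neuron fixed.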

More plausible mechanisms for controlling the Hebbian process based on maintaining an average output firing rate have recently emerged. Two different types of these internal mechanisms have been found. One type controls the intrinsic excitability of the neuron (reviewed by Zhang and Linden (2003)). The molecular causes underlying this mechanism are still being investigated, but some of the behavior has been documented. In a typical experiment, a neuron would be excited repeatedly at high frequency. Then the output firing rate would be measured when current is injected. The intrinsic excitability (the ratio of firing rate to injected current) is higher after the stimulation. This makes the neuron even more sensitive to its inputs.

Two models have proposed that neurons modify their excitability to maintain a high rate of information transfer. In the first of these models (Stemmler & Koch, 1999), individual neurons change their ion channel conductances in order to match an input distribution. The neurons are able to maintain high information rates in response to a variety of distributions. The second model (Triesch, 2004) proposes that neurons adjust their output nonlinearities to maximize information transfer. There it is assumed that the neuron can keep track of its average firing rate and average variance of firing rate. Given this limited information, it does the best it can by adjusting the slope and offset of an output sigmoid function.
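As a rough illustration of the second idea, the sketch below nudges the offset and slope of a unit's output sigmoid using only its running-average rate and rate variance. It is a simplified stand-in rather than the actual update rule of Triesch (2004); all names and constants are assumptions.

```python
import numpy as np

def sigmoid(x, a, theta):
    """Output nonlinearity y = 1 / (1 + exp(-a * (x - theta)))."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def adapt_sigmoid(a, theta, avg_rate, avg_var,
                  target_rate=0.1, target_var=0.01, lr=0.01):
    """Adjust slope a and offset theta from running activity statistics."""
    theta += lr * (avg_rate - target_rate)  # too active on average -> raise the threshold
    a += lr * (target_var - avg_var)        # output too variable -> flatten the sigmoid
    return max(a, 1e-3), theta              # keep the slope positive
```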

In the second type of internal neuronal mechanism, it was shown (Maffei et al., 2004, Turrigiano et al., 1998, Turrigiano and Nelson, 2004) that neurons in the cortex actively maintain an average firing rate by scaling their incoming weights. The mechanism has been examined in cultures and in other experiments using in vivo visual deprivation. It has been shown that the incoming synapses are altered by a multiplicative factor, which presumably preserves the relative strengths of the synapses. The underlying mechanisms are not yet known, but there is ongoing research looking at intracellular chemical factors such as calcium and brain-derived neurotrophic factor (BDNF) (Turrigiano & Nelson, 2004). The levels of these factors are related to firing rates, so integrating them over time could lead to an estimate of average firing rate and produce a chemical signal for synaptic change. Another interesting finding is that a neuron with a high average firing rate will decrease the strength of its incoming excitatory synapses but increase the strength of its incoming inhibitory synapses (Maffei et al., 2004), thus altering the excitatory/inhibitory balance.
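A minimal sketch of this kind of multiplicative scaling is given below, assuming the neuron maintains a leaky running average of its firing rate (a stand-in for a calcium-like signal) and slowly rescales all of its incoming excitatory weights toward a target rate. The variable names, the update form, and the constants are illustrative assumptions rather than the paper's equations.

```python
import numpy as np

def update_avg_rate(avg_rate, current_rate, tau=100.0):
    """Leaky integration of the firing rate (a stand-in for a calcium-like signal)."""
    return avg_rate + (current_rate - avg_rate) / tau

def scale_excitatory_synapses(w_in, avg_rate, target_rate, beta=0.01):
    """Multiplicatively scale incoming excitatory weights toward the target rate.

    A single factor multiplies every synapse, so relative strengths are
    preserved: the factor shrinks the weights when the cell is too active
    and grows them when it is too quiet.
    """
    factor = 1.0 + beta * (target_rate - avg_rate)
    return np.asarray(w_in) * factor
```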

Homeostatic mechanisms have been implemented in two models. In one study, the molecular underpinnings of homeostasis were explored (Yeung, Shouval, Blais, & Cooper, 2004). It was suggested that long-term potentiation (LTP), long-term depression (LTD), and homeostatic synaptic scaling are all related through intracellular calcium levels. The homeostatic mechanism may influence the level of calcium, thereby changing the conditions under which LTP and LTD are induced (since calcium levels play a major role in both). The other model concerns a particular class of associative memories (Chechik, Meilijson, & Ruppin, 2001). It is shown that the storage capacity can be increased if the neuron’s weights are controlled with either weight normalization or homeostatic synaptic scaling.

There have also been attempts to adjust neuron thresholds within neural networks. One such network (Horn & Usher, 1989) was a Hopfield network whose neurons always produced one of two possible outputs (+1 or −1). The neuronal thresholds that determine which state each neuron takes, applied to its total input activation, were adjusted according to recent activation. The authors reported that interesting periodic dynamics emerged. Another adjustable-threshold network (Gorchetchnikov, 2000) was a winner-take-all associative memory. In that work, the thresholds of the output sigmoid functions were adjusted. It was shown that on problems like XOR classification, the network learned the appropriate mapping faster than it did without an adjustable threshold. While these approaches are computationally interesting, there is not yet evidence that neurons adjust their thresholds based on their average firing rate.

It has previously been suggested that homeostatic synaptic scaling might form a basis for keeping Hebbian learning in check (Miller, 1996, Turrigiano and Nelson, 2004). This possibility is explored here. It may very well be that this mechanism is one of several that constrain synapse strength, but it is examined here in isolation to get a better understanding of its capability.

Section snippets

Architecture with homeostatic synaptic scaling

The SOM model is trained with a series of episodes in which randomly selected input vectors are presented. At each step, the input vector, x, is first multiplied by the feedforward weights, W_FF. In order to get the self-organizing map effect, this feedforward activity is then multiplied by a set of lateral connections, W_lat. The patterns of weights that the neurons send out laterally are identical to a preset prototype, so multiplying by W_lat is equivalent to taking a convolution with a
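The snippet below is a rough sketch of one such activation step for a 1-D map: the input is passed through the feedforward weights and the result is convolved with a fixed lateral-interaction profile. The shapes, the choice of kernel, and the rectification at the end are assumptions for illustration, not the paper's exact equations.

```python
import numpy as np

def activation_step(x, W_ff, lateral_kernel):
    """One activation step of the map.

    x:              input vector, shape (n_inputs,)
    W_ff:           feedforward weights, shape (n_outputs, n_inputs)
    lateral_kernel: preset lateral-interaction profile, shape (k,)
    """
    ff = W_ff @ x                                       # feedforward activity
    lat = np.convolve(ff, lateral_kernel, mode="same")  # lateral interaction as a convolution
    return np.clip(lat, 0.0, None)                      # simple rectification (assumed)
```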

Simulation results

Self-organizing maps were simulated using the synaptic scaling described by the preceding equations. The input vectors used in these simulations are specified by a 1-D Gaussian shape (standard deviation σ of 15 units) and are presented one per episode. For each episode, the input Gaussian is centered on one of the input units selected at random. The input locations are drawn from a uniform distribution for the experiments described in Sections 3.1 Homeostasis and map formation, 3.2 Synapse
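A small sketch of the input generation for one episode is given below, assuming a map of n_inputs input units: a 1-D Gaussian bump with σ = 15 is centered on a uniformly chosen input unit. The map size and the absence of any normalization of the bump are assumptions.

```python
import numpy as np

def make_input(n_inputs=200, sigma=15.0, rng=None):
    """Generate one episode's input: a Gaussian bump at a random location."""
    rng = np.random.default_rng() if rng is None else rng
    center = rng.integers(n_inputs)  # location drawn from a uniform distribution
    units = np.arange(n_inputs)
    x = np.exp(-((units - center) ** 2) / (2.0 * sigma ** 2))
    return x, center
```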

Conclusions

In this work, we have proposed a way to go beyond the standard weight normalization. This long-used measure has worked to counteract the unconstrained growth of Hebbian learning, but there is little evidence from experiments that justifies its use. Homeostatic synaptic scaling, on the other hand, has been seen recently in biological experiments. It has been shown here that using homeostatic synaptic scaling in place of standard weight normalization still leads to proper organized map formation.

Acknowledgements

We would like to thank Jochen Triesch for helpful comments on this work. This material is based upon work supported by the National Science Foundation under NSF CAREER Grant No. 0133996 and was also supported by NSF IGERT Grant #DGE-0333451 to GW Cottrell. This work was also supported by an NIH Cognitive Neuroscience Predoctoral Training Grant.

References (24)

  • Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics.

  • Kohonen, T. (2001). Self-organizing maps (3rd ed.). Springer.
Cited by (28)

  • Computational models of memory consolidation and long-term synaptic plasticity during sleep (2019), Neurobiology of Learning and Memory.

    Citation excerpt: It is also worth mentioning a completely different approach, unconcerned with detailed biophysics, to seek computational support for the behavioral consequences of SHY. Sullivan and de Sa (2008) applied a homeostatic synaptic scaling algorithm for abstract self-organizing maps to contrast with the “standard normalization of weights” often used to neutralize the unconstrained growth of Hebbian learning (Sullivan & de Sa, 2006). Self-organizing maps are an artificial neural network formation used for unsupervised feature learning from an input dataset (Kohonen, 1982).

  • Synaptic plasticity (2019), Synaptic Transmission.

  • Neural Computation Theories of Learning (2017), Learning and Memory: A Comprehensive Reference.

  • Tuning into diversity of homeostatic synaptic plasticity (2014), Neuropharmacology.

    Citation excerpt: During early childhood, the brain experiences intense growth and development and these normal processes have been linked to enhanced susceptibility to seizure in young children (Wong, 2005). However, the overall risk of pediatric seizure remains relatively low considering the breadth of developmental changes at play, prompting some to hypothesize that homeostatic mechanisms exist to stabilize neural networks during development (Davis, 2006; Turrigiano and Nelson, 2004) and during mature brain function (Lazar et al., 2009; Sullivan and de Sa, 2006, 2008; Toyoizumi et al., 2005; Turrigiano, 2012, 2008; Watt and Desai, 2010; Yeung et al., 2004), but see (de Vries and van Slochteren, 2008; Gilson and Fukai, 2011; Houweling et al., 2005; Thivierge and Cisek, 2008). Several distinct homeostatic plasticity mechanisms have been described, each in principle providing neurons the means to tune and maintain overall levels of spiking activity within biologically-determined set points.

  • The Self-Tuning Neuron: Synaptic Scaling of Excitatory Synapses (2008), Cell.

    Citation excerpt: Fortunately, there is a simple and very general solution to many aspects of this stability problem. Theoretical work has suggested that if each neuron can sense how active it is and adjust its synaptic weights up or down to keep this activity close to some set-point value, network activity will remain stable in the face of correlation-based changes in synaptic strength or developmental changes in synaptic connectivity (Figure 1C) (Miller, 1996; Sullivan and de Sa, 2006). A decade ago, just such a mechanism was discovered in neocortical neurons and dubbed “synaptic scaling” because it was observed to globally scale all of a neuron's synapses up or down in strength in the correct direction to stabilize neuronal firing (Turrigiano et al., 1998).
