
Neural Networks

Volume 17, Issue 1, January 2004, Pages 5-27

Self-organising continuous attractor networks with multiple activity packets, and the representation of space

https://doi.org/10.1016/S0893-6080(03)00210-7

Abstract

‘Continuous attractor’ neural networks can maintain a localised packet of neuronal activity representing the current state of an agent in a continuous space without external sensory input. In applications such as the representation of head direction or location in the environment, only one packet of activity is needed. For some spatial computations, however, a number of different locations, each with its own features, must be held in memory. We extend previous approaches to continuous attractor networks (in which one packet of activity is maintained active) by showing that a single continuous attractor network can maintain multiple packets of activity simultaneously, if each packet is in a different state space or map. We also show how, through learning, such a network could self-organise so that the packets in each space can be moved continuously in that space by idiothetic (motion) inputs. We show how such multi-packet continuous attractor networks could be used to maintain different types of feature (such as form vs. colour) simultaneously active at the correct locations in a spatial representation. We also show how higher-order synapses can improve the performance of these networks, and how the location of a packet could be read out by motor networks. The multi-packet continuous attractor networks described here may be used for spatial representations in brain areas such as the parietal cortex and hippocampus.

Introduction

‘Continuous attractor’ neural networks are neural networks which are able to maintain a localised packet of neuronal activity representing the current state of an agent in a continuous space, for example head direction or location in the environment, without external sensory input (Amari, 1977; Taylor, 1999). They are useful in helping to understand the representation of head direction (Redish et al., 1996; Skaggs et al., 1995; Stringer et al., 2002b; Zhang, 1996), place (Redish and Touretzky, 1998; Samsonovich and McNaughton, 1997; Stringer et al., 2002a), and, in the primate hippocampus, spatial view (Stringer, Rolls, & Trappenberg, 2003a). Continuous attractor networks use excitatory recurrent collateral connections between the neurons to reflect the distance between the neurons in the state space (e.g. head direction space) of the agent. Global inhibition is used to keep the number of neurons in a bubble of activity relatively constant. In the applications of continuous attractor networks discussed above, where a network is required to represent only a single state of the agent (i.e. head direction, place or spatial view), it is appropriate for the continuous attractor networks to support only one activity packet at a time.
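
To make the mechanism concrete, the following minimal sketch (our illustration, not the authors' simulation code; all parameter values are assumptions) shows a one-dimensional continuous attractor in which Gaussian excitatory recurrent weights reflect distance in the state space and a single global inhibition term keeps the size of the activity packet roughly constant:

```python
import numpy as np

N = 100                                   # neurons covering a circular 1D state space
sigma_w = 5.0                             # width of the excitatory weight profile (assumed)
w_inh = 0.3                               # strength of global inhibition (assumed)

# Excitatory recurrent weights fall off with the distance between the neurons'
# preferred locations in the state space (circular, so take the shorter arc).
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)
W = np.exp(-d**2 / (2 * sigma_w**2))

r = np.zeros(N)
r[45:55] = 1.0                            # a transient external cue initialises a packet

for t in range(200):                      # cue removed; only recurrent dynamics remain
    h = W @ r - w_inh * r.sum()           # recurrent excitation minus global inhibition
    r = np.clip(np.tanh(h), 0.0, None)    # simple threshold-nonlinear firing rates

# A localised packet persists near the cued location without any external input.
print(np.flatnonzero(r > 0.5))
```

With these (assumed) parameters the packet neither dies out nor spreads across the whole network, because the global inhibition grows with the total activity.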

In this paper we propose that continuous attractor networks may be used in the brain in an alternative way, in which they support multiple activity packets at the same time. The stability of multiple activity packets in a single network has been discussed previously by, for example, Amari (1977) and Ermentrout and Cowan (1979). Ermentrout and Cowan (1979) analysed neural activity in a two-dimensional (2D) network, demonstrating the existence of a variety of doubly periodic patterns as solutions to the field equations for the net activity. Amari (1977) considered a continuous attractor neural network in which the neurons are mapped onto a one-dimensional (1D) space x, with short-range excitatory connections and longer-range inhibitory connections between the neurons. If two activity packets are stimulated at separate locations in the same continuous attractor network, then the two packets may interact with each other. The neurons in the second packet receive an input s(x) from the first packet, and the second packet moves so as to climb towards the maximum of s(x). The effect of the second packet on the first is similar. Depending on the precise shape of the synaptic weight profile within the network, this interaction may draw the two packets together or push them apart. If the two activity packets are far enough apart, then the gradient of the function s(x) may be close to zero, and the two packets will not affect each other (Amari, 1977). However, in this paper we investigate a more general situation in which a single continuous attractor network can maintain multiple packets of activity simultaneously, where individual packets may exist in different feature spaces. We show how such multi-packet continuous attractor networks could be used to maintain representations of a number of different classes of feature (such as particular line shapes and colour) simultaneously active in the correct location in their respective feature spaces, where such feature spaces might correspond to the egocentric physical space in which an agent is situated.
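
For reference, the field dynamics analysed by Amari (1977) take the standard form (notation slightly adapted here; this is the usual one-dimensional neural field equation, not a new result of the present paper):

$$\tau \, \frac{\partial u(x,t)}{\partial t} \;=\; -u(x,t) \;+\; \int w(x-x')\, f\big(u(x',t)\big)\, dx' \;+\; h \;+\; s(x,t),$$

where $u(x,t)$ is the average membrane potential of the neurons at position $x$, $f$ is a threshold output function, $w$ is the weight kernel that is excitatory at short range and inhibitory at longer range, $h$ is a resting level, and $s(x,t)$ is the external input. The input that one packet provides to the region occupied by another packet enters through $s(x)$, and the second packet drifts up the gradient of that contribution, which is why packets far enough apart, where the gradient is effectively zero, do not influence one another.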

The above proposal is somewhat analogous to that described by Recce and Harris (1996). These authors developed a model that learned an egocentric map of the spatial features in a robot's environment. During navigation through the environment, the representations of the spatial features were used in conjunction with each other. That is, in a sense, the representations of the different spatial features were co-active in working memory. This provided a robust basis for navigation through the environment. However, the model developed by Recce and Harris (1996) was not a biologically plausible neural network model. In the work presented here, our aim is to develop biologically plausible neural network models that are capable of the simultaneous representation of many spatial features in the environment. The underlying theoretical framework we use to achieve this is a continuous attractor neural network which has been trained to encode multiple charts. Previous investigations with multiple activity packets in a multichart neural network have been described by Samsonovich (1998), who showed that multiple ‘discordant’ activity packets may co-exist and move independently of one another in such a network. Samsonovich (1998) also reported that the network could support activity packets simultaneously active on different charts. However, the simulation results shown in that paper were restricted to the case of multiple activity packets co-active on the same chart.

To elaborate, each neuron in the network might represent a particular class of feature (such as a straight edge, or red colour) at a particular egocentric location in the environment. Thus, each class of feature is represented by a different set of neurons, where each of the neurons responds to the presence of a feature at a particular location. The different sets of neurons that encode the different features may have many cells in common and so overlap significantly with each other, or they may have no cells in common, in which case each neuron responds to no more than one feature. For each type of feature, the ordering within the network of the neurons that represent the location of the feature in space is random. Each separate feature space therefore has a unique ordering of neurons, which we refer to as a ‘map’; however, all the feature maps are encoded in the same network. The presence of a particular feature at a particular location in the environment is represented by an activity packet centred at the appropriate location in the map associated with that feature. The network we describe can maintain representations of a particular combination of, e.g. colour features in given relative egocentric spatial positions in the appropriate maps, and simultaneously maintain active another combination of, e.g. shape features in given relative spatial positions. Considered another way, the network can model several different state spaces, with no topological relation between the positions of the neurons in the network and the locations that they encode in each state space. The topology within each state space is specified by the connection strengths between the neurons, with each synaptic strength reflecting the distance between the two neurons in that state space. In the example above, one state space might represent the egocentric location of a straight edge in the physical environment, and another state space might represent the egocentric location of the colour red in the physical environment. In this example, all the state spaces are mapped onto the same physical environment, but this need not be the case.
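
A minimal sketch of how several feature maps with unrelated neuron orderings could be embedded in a single recurrent weight matrix (an illustration under assumed parameters and an assumed training outcome, not the authors' procedure): each map assigns the neurons a random ordering, and a distance-dependent weight profile is summed into the same matrix for every map.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500            # feature cells in the continuous attractor network
n_maps = 3         # number of feature spaces (e.g. edge locations, colour locations)
sigma_w = 10.0     # width of the weight profile within each map (assumed)

W = np.zeros((N, N))
positions = []                            # positions[m][i] = location of neuron i in map m
for m in range(n_maps):
    order = rng.permutation(N)            # a random, map-specific ordering of the neurons
    pos = np.empty(N)
    pos[order] = np.arange(N)
    positions.append(pos)
    d = np.abs(pos[:, None] - pos[None, :])          # pairwise distances in this map
    d = np.minimum(d, N - d)                         # circular state space
    W += np.exp(-d**2 / (2 * sigma_w**2))            # superimpose this map's weight profile

np.fill_diagonal(W, 0.0)                  # no self-connections
```

Because each map uses an independent random permutation, the neurons forming a localised packet in one map are scattered across any other map, which is why packets belonging to different feature spaces interfere only weakly (a point taken up in the theoretical analysis section below).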

Moreover, in this paper we show how the absolute spatial locations of the packets can be moved together (or independently) within the separate feature spaces using, for example, an idiothetic (self-motion) signal. Furthermore, the locations of the activity packets in the separate feature maps can be kept in relative alignment as the agent moves with respect to, for example, an object that consists of a combination of features. This architecture therefore has implications for understanding feature binding. Because the network is able to maintain and update the representations of many different (e.g. shape) features simultaneously (which implies binding) using an idiothetic signal, it can maintain a full three-dimensional (3D) representation of the spatial structure of an agent's environment even as the agent moves within that environment in the absence of visual input.

Section snippets

Model 1: Network model with low order synapses

In this section we present Model 1, a continuous attractor network that can stably maintain simultaneously active representations of multiple features, each at its own location. The model allows the spatial locations of the features to remain fixed relative to one another, in which case the agent can be thought of as moving through a fixed environment. The model also allows for the case where each feature can move to different locations independently.
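
A rough sketch of the kind of leaky-integrator dynamics typically used in this family of models (Stringer et al., 2002a, 2002b), with an idiothetic drive delivered through pairwise Sigma-Pi synapses; the exact equations and parameters of Model 1 are given in the full paper, so everything below should be read as an assumed, illustrative form rather than the paper's specification.

```python
import numpy as np

def model1_step(h, r, W_rec, w_inh, W_id, r_id, tau=1.0, dt=0.2, phi0=1.0, phi1=1.0):
    """One illustrative update step (assumed form, not the paper's exact equations).

    h      : activations of the feature cells (length N)
    r      : firing rates of the feature cells (length N)
    W_rec  : recurrent weights within the continuous attractor (e.g. built as above)
    w_inh  : global inhibition constant
    W_id   : assumed Sigma-Pi weights; W_id[i, j, k] couples presynaptic feature
             cell j and idiothetic cell k onto feature cell i
    r_id   : firing rates of the idiothetic (self-motion) cells (length K)
    """
    rec = phi0 * ((W_rec - w_inh) @ r)                       # recurrent drive minus inhibition
    idio = phi1 * np.einsum('ijk,j,k->i', W_id, r, r_id)     # pairwise Sigma-Pi idiothetic drive
    h = h + (dt / tau) * (-h + rec + idio)                   # leaky-integrator update
    r = np.clip(np.tanh(h), 0.0, None)                       # firing-rate transfer function
    return h, r
```

When the idiothetic cells are silent the packets are simply maintained; when they fire, and the W_id weights have been appropriately learned, the Sigma-Pi term biases the recurrent drive asymmetrically so that the packets move through their respective feature spaces.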

Model 2: Network model with higher order synapses

In Model 2 the recurrent connections within the continuous attractor network of feature cells employ Sigma–Pi synapses to compute a weighted sum of the products of inputs from other neurons in the continuous attractor network. In addition, in Model 2 the Sigma–Pi synapses connecting the idiothetic cells to the continuous attractor network use even higher order combinations of pre-synaptic cells.

The general network architecture of Model 2 is shown in Fig. 9.
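
As a hedged illustration of what such higher-order terms compute (the exact orders and index combinations used in Model 2 are specified in the full paper; the tensor shapes below are assumptions chosen for concreteness), the recurrent term is a weighted sum of products of pairs of presynaptic feature cells, and the idiothetic term adds an idiothetic cell to each product:

```python
import numpy as np

def sigma_pi_inputs(r, r_id, W_rec3, W_id4):
    """Sigma-Pi input terms in the style of Model 2 (assumed orders, for illustration).

    r       : firing rates of the feature cells (length N)
    r_id    : firing rates of the idiothetic cells (length K)
    W_rec3  : recurrent Sigma-Pi weights, shape (N, N, N)
    W_id4   : idiothetic Sigma-Pi weights, shape (N, N, N, K)
    """
    # Weighted sum of products of pairs of presynaptic feature cells
    rec = np.einsum('ijk,j,k->i', W_rec3, r, r)
    # Higher-order term: products of two feature cells and one idiothetic cell
    idio = np.einsum('ijkl,j,k,l->i', W_id4, r, r, r_id)
    return rec, idio
```

Note that such weight tensors grow rapidly with the number of feature cells (as N³ and N³K here), so a sketch of this kind is only practical for small networks.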

How the representations of multiple features within a continuous attractor network may be decoded by subsequent, e.g. motor systems

In this section we consider how subsequent, for example motor, systems in the brain are able to respond to the representations of multiple features supported by a continuous attractor network of feature cells. The execution of motor sequences by the motor system may depend on exactly which features are present in the environment, and where the features are located with respect to the agent. However, for both models 1 and 2 presented in this paper, if multiple activity packets are active within …
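
One simple, hypothetical way a downstream system could read out where a packet sits within a given feature map (our illustration; the paper's own decoding scheme may differ) is a population-vector style estimate restricted to that map's ordering of the neurons:

```python
import numpy as np

def packet_location(r, pos, n_locations):
    """Estimate the location of an activity packet within one feature map.

    r           : firing rates of the feature cells
    pos         : pos[i] = location in this map encoded by neuron i
                  (one of the random orderings built earlier)
    n_locations : size of the circular state space

    Returns the firing-rate-weighted circular mean of the encoded locations
    (a hypothetical readout, not necessarily the mechanism proposed in the paper).
    """
    angles = 2.0 * np.pi * np.asarray(pos) / n_locations
    vec = np.sum(np.asarray(r) * np.exp(1j * angles))        # population vector on the circle
    return (np.angle(vec) % (2.0 * np.pi)) * n_locations / (2.0 * np.pi)
```

Applying the same readout with each map's ordering in turn yields one location estimate per feature space, which is the kind of information a motor system acting on the features would need.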

Theoretical analysis of interaction between activity packets in different feature spaces

In Section 1 we reviewed the theoretical results of Amari (1977), which describe how multiple activity packets interact in a single feature space encoded in a continuous attractor network. As the numerical results in this paper have shown that multiple activity packets can co-exist in different feature spaces in the same network, we have developed a mathematical analysis of this scenario. The analysis examines how activity packets in different feature spaces, encoded within the same continuous …
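
A hedged sketch of the intuition behind such an analysis (our paraphrase of why cross-map interactions should be weak, not the paper's derivation): consider a packet $P_A$ active in feature space $A$, and the input it delivers, through the weight components laid down for space $B$, to a neuron whose location in space $B$ is $x$,

$$s_B(x) \;=\; \sum_{j \in P_A} w\big(d_B(x, x^B_j)\big) \;\approx\; |P_A|\,\overline{w}, \qquad\text{so}\qquad \frac{\partial s_B}{\partial x} \;\approx\; 0,$$

where $x^B_j$ is the position of neuron $j$ under map $B$'s ordering, $d_B$ is distance in that map, and $\overline{w}$ is the mean of the weight profile. Because the neurons of $P_A$ occupy effectively random positions in map $B$, their contribution is approximately uniform across map $B$, its gradient is close to zero, and, by the argument of Amari (1977) reviewed in Section 1, the packet in space $B$ is neither attracted towards nor repelled from the packet in space $A$.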

Discussion

In this paper we have explored how continuous attractor neural networks can be used to support multiple packets of neuronal activity, and how these separate representations may be simultaneously updated by external inputs to the continuous attractor network, where such external inputs might represent, for example, idiothetic inputs or motor efference copy. To achieve this, the continuous attractor networks presented here make use of (i) a recurrent synaptic connectivity that encodes each …

Acknowledgements

This research was supported by the Wellcome Trust, by the Human Frontier Science Program, by the MRC (through a Programme Grant to E. T. Rolls), and by the MRC Interdisciplinary Research Centre for Cognitive Neuroscience.

References

  • Amari, S. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics.
  • Battaglia, F. P., et al. (1998). Attractor neural networks storing multiple space representations: a model for hippocampal place fields. Physical Review E.
  • Ermentrout, G. B., et al. (1979). A mathematical theory of visual hallucination patterns. Biological Cybernetics.
  • Földiák, P. (1991). Learning invariance from transformation sequences. Neural Computation.
  • Hertz, J., et al. (1991). Introduction to the theory of neural computation.
  • Koch, C. (1999). Biophysics of computation.
  • Mozer, M. C. (1991). The perception of multiple objects: A connectionist approach.
  • O'Keefe, J., et al. (1978). Hippocampal place units in the freely moving rat: why they fire where they fire. Experimental Brain Research.
  • O'Mara, S. M., et al. (1994). Neurons responding to whole-body motion in the primate hippocampus. Journal of Neuroscience.
  • Recce, M., et al. (1996). Memory for places: a navigational model in support of Marr's theory of hippocampal function. Hippocampus.
  • Redish, A. D. (1999). Beyond the cognitive map: From place cells to episodic memory.
  • Redish, A. D., et al. (1996). A coupled attractor model of the rodent head direction system. Network: Computation in Neural Systems.
  • Redish, A. D., et al. (1998). The role of the hippocampus in solving the Morris water maze. Neural Computation.
  • Rolls, E. T., et al. (2002). Computational neuroscience of vision.
  • Rolls, E. T., et al. (2002). The receptive fields of inferior temporal cortex neurons in natural scenes. Journal of Neuroscience.
  • Rolls, E. T., et al. (2003). Visual scene perception.
  • Wallis, G., et al. (1997). Invariant face and object recognition in the visual system. Progress in Neurobiology.