A scale-invariant internal representation of time

Neural Comput. 2012 Jan;24(1):134-93. doi: 10.1162/NECO_a_00212. Epub 2011 Sep 15.

Abstract

We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
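The two-stage mechanism described above can be made concrete in a few lines of code. The following is a minimal sketch, not the paper's implementation: the function names (leaky_integrate, invert), the log-spaced grid of decay rates s, the choice k = 4, and the use of repeated finite differences to approximate the k-th s-derivative in Post's inversion formula are all illustrative assumptions.

```python
import math
import numpy as np

def leaky_integrate(f, s, dt):
    """Bank of leaky integrators dF/dt = -s*F + f(t), stepped by forward Euler.

    At time t the bank holds F(s, t) = integral of f(t') exp(-s (t - t')) dt',
    i.e. the (real) Laplace transform of the stimulus history.
    """
    F = np.zeros((len(f), len(s)))
    for i in range(1, len(f)):
        F[i] = F[i - 1] + dt * (-s * F[i - 1] + f[i - 1])
    return F

def invert(F, s, k=4):
    """Approximate inversion of the Laplace transform via Post's formula:

        f~(tau*) ~ ((-1)^k / k!) * s^(k+1) * d^k F / ds^k,  with tau* = k/s.

    The s-derivative is taken by k rounds of finite differences across the
    bank, a crude stand-in for the paper's linear operator.
    """
    D = F.copy()
    sk = np.asarray(s, dtype=float)
    for _ in range(k):
        D = np.diff(D, axis=-1) / np.diff(sk)
        sk = 0.5 * (sk[:-1] + sk[1:])   # midpoints after each difference
    tau_star = k / sk                   # each output unit peaks at delay k/s
    f_tilde = ((-1) ** k / math.factorial(k)) * sk ** (k + 1) * D
    return tau_star, f_tilde

# Example: a brief pulse at t = 0; three seconds later, the reconstructed
# history f_tilde[-1], plotted against tau_star, is a bump centered near 3 s.
dt = 0.01
s = np.geomspace(0.5, 50.0, 80)   # log-spaced decay rates (assumed)
f = np.zeros(300)
f[0] = 1.0 / dt                   # unit-area pulse at t = 0
tau_star, f_tilde = invert(leaky_integrate(f, s, dt), s, k=4)
```

Each column of f_tilde, viewed as a function of time after the pulse, behaves like one of the time cells the abstract describes: it peaks at its own latency tau* = k/s, and its temporal spread grows in proportion to that latency, which is the scale invariance referred to above. A row of f_tilde at any fixed moment is the correspondingly smeared reconstruction of the stimulus history, blurrier for more remote events.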

MeSH terms

  • Algorithms
  • Animals
  • Computational Biology / methods
  • Conditioning, Classical / physiology
  • Humans
  • Learning*
  • Memory / physiology
  • Memory, Episodic
  • Models, Neurological*
  • Neural Networks, Computer
  • Neurons / physiology
  • Time Factors