Abstract
Dendrites receive the vast majority of a single neuron's inputs, and coordinate the transformation of these signals into neuronal output. Ex vivo and theoretical evidence has shown that dendrites possess powerful processing capabilities, yet little is known about how these mechanisms are engaged in the intact brain or how they influence circuit dynamics. New experimental and computational technologies have led to a surge in interest to unravel and harness their computational potential. This review highlights recent and emerging work that combines established and cutting-edge technologies to identify the role of dendrites in brain function. We discuss active dendritic mediation of sensory perception and learning in neocortical and hippocampal pyramidal neurons. Complementing these physiological findings, we present theoretical work that provides new insights into the underlying computations of single neurons and networks by using biologically plausible implementations of dendritic processes. Finally, we present a novel brain–computer interface task, which assays somatodendritic coupling to study the mechanisms of biological credit assignment. Together, these findings present exciting progress in understanding how dendrites are critical for in vivo learning and behavior, and highlight how subcellular processes can contribute to our understanding of both biological and artificial neural computation.
Introduction
An essential role of neurons is to transform a barrage of synaptic input into a meaningful stream of action potential (AP) output. The vast majority of these excitatory and inhibitory inputs are received on the dendritic tree. In most pyramidal neurons, distal dendrites are innervated by long-range inputs, while the proximal, perisomatic dendrites typically receive more local inputs. Ex vivo experimental work has shown that this functional arrangement allows computations that extend far beyond the linear summation of synaptic input, such as independent processing of different inputs and supralinear integration across compartments. How these single-neuron properties translate into behaviorally relevant in vivo neural computations remains an exciting area of research that will provide critical insight into the computational properties of neural circuits, and provide crucial biological constraints for building more powerful and efficient artificial intelligence.
Dendrites possess an array of active and passive biophysical features, which allow them to differentially process various information streams (Rall, 1962; Larkum et al., 2009; Major et al., 2013; O'Hare et al., 2022). They exhibit passive cable properties which produce significant distance-dependent signal attenuation, allowing independent signal processing in different dendritic compartments. Dendrites also possess an array of voltage-dependent ion channels which allow them to actively produce sodium, NMDA, and calcium spikes, leading to considerable supralinear signal transformation (Schiller et al., 1997, 2000; Golding and Spruston, 1998; Larkum et al., 1999; Stuart and Häusser, 2001; Palmer et al., 2014). Calcium spikes, for example, have a profound impact on somatic excitability, triggering trains of high-frequency APs (Larkum et al., 1999; Waters et al., 2003; Williams and Stuart, 2003). The combination of these properties allows dendrites to nonlinearly integrate information and gate subthreshold signals. Furthermore, somatic APs can backpropagate into dendritic branches, and the temporal coincidence of somatic and dendritic activity produces strong calcium recruitment and somatic bursting. These properties imbue single neurons with significant computational capabilities, such as coincidence detection (Larkum et al., 1999; Stuart and Häusser, 2001; Schaefer et al., 2003), gain modulation (Larkum et al., 2004), and memory formation (Losonczy et al., 2008; Letzkus et al., 2015).
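To make the logic of coincidence detection concrete, consider the following toy model. This is a deliberately minimal, event-based sketch rather than a biophysical simulation; the 20 ms coincidence window and the input structure are illustrative assumptions, not measured values.

```python
import numpy as np

def bac_firing_toy(distal_input_times, soma_spike_times, window_ms=20.0):
    """Toy coincidence detector: a dendritic Ca2+ event (and hence a somatic
    burst) is registered whenever a backpropagating AP falls within `window_ms`
    of suprathreshold distal input. All parameters are illustrative."""
    soma_spike_times = np.asarray(soma_spike_times, dtype=float)
    ca_events = []
    for t_dend in distal_input_times:
        # coincidence of distal depolarization and a backpropagating AP
        if np.any(np.abs(soma_spike_times - t_dend) < window_ms):
            ca_events.append(t_dend)   # dendritic Ca2+ spike -> somatic burst
    return ca_events

# Distal input at 100 ms arrives alone and is ignored; input at 300 ms
# coincides with a somatic AP at 305 ms and triggers a dendritic event.
print(bac_firing_toy([100.0, 300.0], [305.0]))   # -> [300.0]
```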
The subcellular organization of pyramidal neurons coincides with circuit-level connectivity motifs. In the mammalian neocortex, for example, pyramidal neurons are embedded into a strikingly conserved laminar architecture; while different cortical areas can have variations of these themes, the overall patterns are intriguingly similar. In primary sensory cortex, bottom-up input is received primarily in layer 4 (L4) and then distributed to proximal dendrites in layers 2/3 (L2/3) and 5 (L5), while long-range top-down projections from higher-level thalamus and cortical areas terminate in superficial layers, specifically layer 1 (L1). Recent literature suggests that this laminar differentiation is maintained at the single-cell level, where inputs from distinct cortical and subcortical regions are preferentially targeted to distinct dendritic compartments (Petreanu et al., 2009; Mao et al., 2011; Lafourcade et al., 2022). Interestingly, this subcellular targeting shows a high degree of cell type-specificity, with different innervation patterns depending on layer and cell class, which is still poorly understood (Mao et al., 2011). In conjunction with dendritic morphology, this input segregation endows pyramidal neurons with powerful computational properties that allow complex signal processing, such as multiplexing of top-down context-dependent and reward prediction error signals with bottom-up sensory information (Naud and Sprekeler, 2018).
Because of their complexity and experimental intractability, studying the electrophysiological properties of dendrites in the intact brain has historically been exceedingly difficult, limiting current literature to a small number of studies (Kamondi et al., 1998; Helmchen et al., 1999; Svoboda et al., 1999; Xu et al., 2012; Smith et al., 2013; Peters et al., 2014; Jelitai et al., 2016). With the advent of two-photon imaging and genetically engineered calcium indicators, new avenues have opened up to unravel and harness dendrites' computational potential. Over the past decade, an increasing number of studies have successfully used these new tools and have uncovered in vivo dendritic roles for enhanced sensory encoding and learning (Jia et al., 2011; Xu et al., 2012; Palmer et al., 2014; Cichon and Gan, 2015; Sheffield and Dombeck, 2015; Takahashi et al., 2016; Bittner et al., 2017; Ranganathan et al., 2018; Lacefield et al., 2019; Godenzini et al., 2021). Interestingly, studies where somatic and dendritic activity were simultaneously recorded have found highly correlated activity between the two compartments (Beaulieu-Laroche et al., 2019; Francioni et al., 2019; Kerlin et al., 2019; Otor et al., 2022), while parallel evidence has reported dendritic computations that are independent from somatic activity (Jia et al., 2010; Sheffield and Dombeck, 2015; Sheffield et al., 2017; Rashid et al., 2020; Voigts and Harnett, 2020).
Meanwhile, computational studies, increasingly well informed and constrained by biophysical findings, have proposed dendritic mechanisms (Ujfalussy et al., 2018; Richards et al., 2019; Francioni and Harnett, 2022) that may fundamentally change how we think about how the brain learns and guides behavior. These modeling studies integrate knowledge gathered from ex vivo and in vivo experiments to probe the underlying mechanisms. Recent biologically inspired models have shown how dendritic morphology and synaptic learning rules can dramatically increase the computational power of individual neurons and circuits (Guerguiev et al., 2017; Kirkpatrick et al., 2017; Richards et al., 2019; Bicknell and Häusser, 2021; Jones and Kording, 2021; Payeur et al., 2021; Greedy et al., 2022; Sezener et al., 2022). They formulate explicit mechanisms for dendritic function and synaptic plasticity that have in turn informed experiments. We are therefore arriving at an exciting convergence point between biological and artificial neural network function.
Here, we review the latest advances in our understanding of active dendritic computations in the mammalian brain. First, we discuss recent biological insights into active dendritic processes and synaptic plasticity during learning and behavior. This is followed by recent theoretical work implementing biologically constrained pyramidal neuron models that feature the computational power of dendrites. We end this review by outlining outstanding questions in the field and future directions in our quest to unravel active dendritic processes in the brain.
Dynamics of dendritic encoding during learning
Flexible and dynamic encoding of synaptic information by cortical dendrites plays a key role in learning. Since most synaptic input arrives on the dendritic arbor and its protruding spines, dendritic responses to incoming information must be dynamic throughout learning. How dendrites dynamically and flexibly alter their activity during learning is largely unknown and is the basis of much research. Sensory association is an ideal behavior for investigating how cortical dendrites can modify their activity during learning. Here, a particular sensory input is associated with a reward in a Go/NoGo paradigm (Carandini and Churchland, 2013). Mice are trained to “Go” (lick or push a lever) within a given time window after a sensory stimulus in order to receive a reward. In contrast, mice must withhold their response in “NoGo” trials. During this paradigm, sensory (feedforward) information would be expected to dominate the voltage response of pyramidal neurons within sensory cortices in the naive (inexperienced) state. At the dendrite level, this results in sparse coding of the sensory input (Murayama et al., 2009; Palmer et al., 2012, 2014; Smith et al., 2013; Takahashi et al., 2016). During learning of a sensory-association task, additional internally generated (feedback) pathways, including attention (Wimmer et al., 2015; Poort et al., 2022) and reward (Wickens et al., 2003), are active. In cortical neurons, these feedback pathways synapse onto different dendritic compartments than feedforward (sensory) pathways (Bannister, 2005; Petreanu et al., 2009; Wimmer et al., 2010). Despite their different dendritic targets, feedforward and feedback pathways can integrate at the level of a single neuron, leading to increased computational power and enhanced somatic output in pyramidal neurons (Larkum et al., 1999). Such integration of different pathways may be required throughout learning, when sensory input must shift from encoding the external environment to contributing to sensory-based behavior.
Using two-photon calcium imaging from the dendrites of cortical L2/3 pyramidal neurons, Palmer and colleagues investigated changes in neural encoding as mice learned to associate a sensory stimulus with a water reward (Godenzini et al., 2021). Overall, these studies illustrate that dendritic activity is dynamic and changes throughout behavior (Fig. 1A). Revealing how dendritic encoding of behaviorally relevant information changes throughout learning is crucial for understanding the role these neural processes may play in both the learning and execution of learned sensory-based behavior.
Dendritic computations underlying experience-dependent hippocampal ensemble dynamics
The hippocampus contains neurons tuned to fire APs in particular spatial locations within an environment (O'Keefe and Dostrovsky, 1971). These so-called place cells are assumed to represent the cellular substrate of episodic memories (Moser et al., 2015), and ensembles of place cells are known to reorganize collectively to support experience-dependent learning (Lever et al., 2002; Hartley et al., 2014).
Previous results have demonstrated that place cells are driven by a subset of inputs whose elevated synaptic weights provide localized excitation that exceeds a spatially uniform level of inhibition (Grienberger et al., 2017). This synaptic input pattern produces a slow ramp of membrane potential depolarization, which underlies location-specific AP firing (Harvey et al., 2009; Epsztein et al., 2011; Cohen et al., 2017). A novel form of synaptic plasticity, behavioral timescale synaptic plasticity (BTSP), mediates this increase in synaptic weights (Bittner et al., 2017). Alternatively, already strengthened or clustered excitatory synapses activated by afferent input can drive place cell activity (Sheffield and Dombeck, 2019), potentially by generating local dendritic spikes (Sheffield and Dombeck, 2015).
BTSP has several distinct characteristics, including that it requires dendritic plateau potentials (“plateaus” for short) instead of APs. These plateau potentials are nonlinear, long-duration voltage signals initiated in the apical dendritic tuft, the contact site of the L3 entorhinal cortex (EC3) input (Steward and Scoville, 1976; Megías et al., 2001). Accordingly, previous in vitro and in vivo work has suggested a central role of the EC3 input in driving plateaus in individual CA1 neurons (Takahashi and Magee, 2009; Bittner et al., 2015), thus recruiting these cells into the place cell representation. Local inhibitory circuits may restrict this process (Turi et al., 2019; Geiller et al., 2020; Rolotti et al., 2022). Indeed, two-photon Ca2+ imaging of dendrite-targeting interneurons has shown that dendritic inhibition is reduced when an animal learns to navigate a novel environment, possibly enhancing a CA1 neuron's ability to fire dendritic plateaus (Sheffield et al., 2017). Notably, a single plateau potential is sufficient to modify the strength of those synapses that are active in a seconds-long time window around the plateau (Bittner et al., 2015, 2017; Grienberger et al., 2017; Diamantaki et al., 2018; Zhao et al., 2020; Priestley et al., 2022). As a result, dendritic plateau potential-mediated BTSP provides a neural mechanism by which a single experience could produce learning-associated changes in brain activity (Magee and Grienberger, 2020). It may also allow hippocampal neurons to compute and store an association between an experience and a delayed outcome, offering a potential solution to the long-standing “temporal credit assignment” problem. Modeling results suggest that the seconds-long time course could result from the interaction of two dendritic signals: a slowly decaying eligibility trace produced by synaptic input, and a more global, faster-decaying signal associated with the plateau potential (Bittner et al., 2017). In conclusion, dendritic plateau potentials are poised to deliver a key signal instructing CA1 ensembles on how to represent an experience by inducing plasticity. Understanding the dendritic processes driving BTSP will provide important insights into how a cellular plasticity mechanism may govern learning, and will inform the development of new classes of artificial neural networks that use neurons endowed with biologically plausible synaptic properties to increase efficiency.
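The two-signal picture can be sketched in a few lines of code. The sketch below assumes simple exponential kernels with illustrative time constants and learning rate; the kernels measured by Bittner et al. (2017) are asymmetric and quantitatively different, so this is only a caricature of the proposed mechanism.

```python
import numpy as np

def btsp_weight_update(presyn_spike_times, plateau_time, w,
                       tau_eligibility=1.5, tau_instructive=0.5, eta=0.1):
    """Toy BTSP rule: each synapse carries a slowly decaying eligibility trace
    set by its own presynaptic activity, while a single plateau potential
    provides a faster-decaying, cell-wide instructive signal. Weights change
    in proportion to the overlap of the two signals. Time constants (seconds)
    and the learning rate are illustrative."""
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    instructive = np.where(t >= plateau_time,
                           np.exp(-(t - plateau_time) / tau_instructive), 0.0)
    dw = np.zeros_like(w)
    for i, spikes in enumerate(presyn_spike_times):   # one spike list per synapse
        eligibility = np.zeros_like(t)
        for s in spikes:
            eligibility += np.where(t >= s, np.exp(-(t - s) / tau_eligibility), 0.0)
        dw[i] = eta * np.sum(eligibility * instructive) * dt
    return w + dw

# Synapse 0 fires 1 s before the plateau (at t = 6 s) and is strongly potentiated;
# synapse 1 fires 6 s before the plateau and is barely changed.
w = np.zeros(2)
print(btsp_weight_update([[5.0], [0.0]], plateau_time=6.0, w=w))
```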
Active calcium currents in cortical dendrites during tactile detection
In cortical L5 pyramidal neurons, regenerative activation of voltage-sensitive calcium channels in apical dendrites leads to locally generated spikes, called dendritic calcium spikes (Yuste et al., 1994; Schiller et al., 1997; Larkum et al., 1999). Until recently, little was known about how calcium spikes function in vivo, but new imaging techniques and tools have finally allowed us to gain insights into their role in the intact brain (Helmchen et al., 1999; Chen et al., 2013). There is increasing evidence that calcium spikes are essential for shaping both cortical activity and behavior in the living animal.
Studies by Takahashi and colleagues on perceptual thresholds provided concrete evidence for the active role of calcium spikes in tactile behavior (Takahashi et al., 2016, 2020). Using a whisker detection task in mice, they found strong activation of calcium spikes in L5 neurons in primary somatosensory cortex (S1), but only when the animals detected the stimuli. Strikingly, manipulation of calcium spikes shifted the animals' perceptual threshold. Their results provide direct evidence that dendritic calcium spikes act as a gain modulator, boosting weak sensory inputs near the perceptual threshold to a supraliminal level (Fig. 1C).
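The gain-modulation interpretation can be illustrated with a toy psychometric model, in which dendritic calcium spikes multiply the effective sensory drive and thereby push near-threshold stimuli across the detection boundary. The sigmoid parameters and the size of the dendritic boost below are assumptions chosen for illustration, not values fitted to the data.

```python
import numpy as np

def detection_probability(stimulus, dendritic_gain=1.0, threshold=1.0, slope=4.0):
    """Toy psychometric function: detection probability is a sigmoid of the
    effective drive, and dendritic Ca2+ spikes multiply weak sensory drive.
    All parameter values are illustrative."""
    effective_drive = dendritic_gain * stimulus
    return 1.0 / (1.0 + np.exp(-slope * (effective_drive - threshold)))

weak_stimulus = 0.9                                                # just below threshold
print(detection_probability(weak_stimulus, dendritic_gain=1.0))    # ~0.40, often missed
print(detection_probability(weak_stimulus, dendritic_gain=1.5))    # ~0.80, usually detected
```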
How does this mechanism operate in cortical circuits? Takahashi et al. (2020) identified two important characteristics of calcium spikes during tactile detection: pathway specificity and context dependence. L5 pyramidal neurons are grouped into two subtypes based on their projection targets (Harris and Shepherd, 2015): intratelencephalic (IT) neurons, which project to cortical areas and the striatum, and extratelencephalic (ET) neurons, which project to subcortical regions, including the thalamus, midbrain, and brainstem. Calcium spikes were present in both IT and ET neurons at baseline in awake mice, but were activated almost exclusively in ET neurons during tactile detection, indicating that calcium spikes selectively engage the cortical-subcortical output pathway. Additionally, calcium spikes were shown to depend on behavioral context. Takahashi et al. (2020) monitored calcium spikes while mice were in two behavioral contexts: passively receiving whisker stimuli and performing the detection task for the same stimuli. In the passive context, the lick spout was removed, so whisker stimuli were no longer associated with water rewards. Interestingly, calcium spikes were observed when mice were actively engaged in stimulus detection, but not when mice were passively receiving stimuli. Thus, calcium spikes selectively enhance sensory responses to behaviorally important inputs depending on context.
Calcium spikes are a unique cellular mechanism that modulates the gain of cortical responses to incoming sensory input in a context-dependent manner and directs the output to subcortical structures for detection. How this unique operation comes about remains unknown, but feedback inputs arriving at the distal dendrites and neuromodulation at the apical dendrites may play a role in regulating calcium spikes in space and time (Larkum et al., 2009; Labarrera et al., 2018; Williams and Fletcher, 2019).
Learning to harness dendritic computations
One of the most striking features of the brain is the rich diversity of neurons. Dendritic morphology and biophysical properties can vary dramatically from one cell to the next, although distinct classes of neurons can be grouped by shared characteristics (Harris and Spacek, 2016; Gouwens et al., 2019). This raises the fundamental question of whether different classes of neurons are specialized for particular dendritic computations. To address this, Bicknell and Häusser (2021) studied cell type-specific dendritic processing using a combination of biophysical modeling and machine learning.
Bicknell and Häusser (2021) took a bottom-up approach, representing the known biology of neurons with biophysically detailed models comprising reconstructed morphologies and a suite of voltage-dependent conductances. Inspired by a long-standing analogy between single pyramidal neurons and artificial neural networks (Poirazi et al., 2003; Beniaguev et al., 2021) (Fig. 1D), the models were then “trained” to implement a given computational task (i.e., a mapping from synaptic input to AP output). The biology embodied by a model is thereby translated into predictions about whether and how such a neuron could perform the task. The key ingredient is the learning rule. This takes the form of a system of equations that predicts the moment-to-moment impact of changes in synaptic strength on somatic output, accounting for dendritic morphology, interactions between synapses, and the nonlinear dynamics of active conductances. This allows a model neuron to improve its performance over the course of learning by making locally optimal synaptic weight updates. Crucially, Bicknell and Häusser (2021) showed that an individual synapse can accurately approximate this general rule using only locally available signals. The proposed algorithm is therefore both a powerful theoretical tool for probing the abilities of single neurons and a mechanistic hypothesis about how the brain could take advantage of dendritic processing.
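The following sketch conveys the flavor of such a rule in a drastically simplified neuron with two dendritic subunits; it is not the rule derived by Bicknell and Häusser (2021), but it illustrates how a per-synapse update can be built from a global performance signal combined with quantities available locally at each branch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_gradient_step(x, w, branch_of, target, eta=0.2):
    """Sketch of learning in a neuron with two dendritic subunits: each branch
    applies a sigmoidal nonlinearity to its summed input, and the soma sums the
    branch outputs. Each synapse is updated using only a global performance
    signal (the output error) and quantities available at its own branch (its
    presynaptic input and the local slope of the branch nonlinearity)."""
    n_branches = branch_of.max() + 1
    branch_drive = np.array([np.sum(w[branch_of == b] * x[branch_of == b])
                             for b in range(n_branches)])
    branch_out = sigmoid(branch_drive)               # local dendritic nonlinearity
    soma = branch_out.sum()                          # somatic summation
    error = target - soma                            # global performance signal
    local_slope = branch_out * (1.0 - branch_out)    # per-branch sensitivity
    dw = eta * error * local_slope[branch_of] * x    # local, per-synapse update
    return w + dw, soma

# Four synapses distributed over two branches, trained toward a target output of 1.5.
x = np.array([0.8, 0.6, 0.7, 0.5])
w = np.zeros(4)
branch_of = np.array([0, 0, 1, 1])
for _ in range(500):
    w, soma = local_gradient_step(x, w, branch_of, target=1.5)
print(round(soma, 3))   # converges toward 1.5
```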
The technique was first applied using an L2/3 pyramidal neuron model challenged with a canonical nonlinear feature-binding problem (Legenstein and Maass, 2011; Cazé et al., 2013). While the morphologically detailed neuron learned to solve this task, a neuron model without dendrites could not. To do so, the detailed model recruited a variety of mechanisms, including supralinear NMDA spikes in basal dendrites, sublinear processing in apical dendrites, and local dendritic inhibition. A comparison of the full model with a purely passive version further revealed that active dendrites support greater task accuracy, flexibility, and noise robustness, all key requirements of in vivo dendritic computation. The investigation has now been expanded to an entire database of experimentally validated models (Allen Institute for Brain Science, 2015), allowing a comprehensive comparison of multiple cell types. The results so far suggest a continuum of task specialization among neurons rather than a purely categorical distinction. Moving forward, it remains an open and exciting question what can be achieved at the circuit level when learning acts within the dendrites of diverse populations of highly expressive neurons, an intriguing biological counterpoint to the networks of point neurons that are the mainstay of artificial intelligence (Acharya et al., 2022).
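The character of the feature-binding problem, and why dendrites help, can be seen in a toy version of the task (a schematic stand-in for the benchmarks of Legenstein and Maass, 2011 and Cazé et al., 2013): the neuron must respond to the conjunctions A+B or C+D but remain silent to A+D or C+B, a mapping that no single weighted sum followed by one threshold can implement, whereas two supralinear dendritic subunits can.

```python
import numpy as np

# Feature-binding toy task: respond to A&B or C&D, but not to A&D or C&B.
patterns = np.array([[1, 1, 0, 0],   # A+B  -> respond
                     [0, 0, 1, 1],   # C+D  -> respond
                     [1, 0, 0, 1],   # A+D  -> silent
                     [0, 1, 1, 0]])  # C+B  -> silent
targets = np.array([1, 1, 0, 0])

def point_neuron(x, w, theta=1.5):
    return (x @ w > theta).astype(int)            # one weighted sum, one threshold

def dendritic_neuron(x, theta=1.5):
    # Branch 1 binds A and B; branch 2 binds C and D. Each branch is supralinear:
    # it contributes only when both of its inputs are present (NMDA-spike-like AND).
    branch1 = (x[:, 0] * x[:, 1]) * 2.0
    branch2 = (x[:, 2] * x[:, 3]) * 2.0
    return (branch1 + branch2 > theta).astype(int)

# No weight vector lets the point neuron solve this (the task is not linearly
# separable), but the two-subunit neuron reproduces the target mapping exactly.
print(dendritic_neuron(patterns), targets)        # [1 1 0 0] [1 1 0 0]
print(point_neuron(patterns, w=np.ones(4)))       # [1 1 1 1] -- fails
```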
Somatodendritic coupling during learning of a brain–computer interface task
Effective learning requires specifically changing the weights of synapses that contribute to behavioral output. Establishing how synaptic efficacy should change to optimize behavior is known as the credit assignment problem (Lillicrap et al., 2020). A powerful algorithm for solving credit assignment in machine learning is the backpropagation of error. Backpropagation updates synaptic weights of individual neurons according to their causal contribution to the network's output, unlike many other learning algorithms, such as reinforcement learning, which broadcast a single global reinforcing signal (Lillicrap et al., 2020).
The predominance of backpropagation in machine learning raises the question of whether its core principle (i.e., the ability to generate neuron-specific teaching signals) also underlies biological learning. However, backpropagation has long been considered biologically implausible (Lillicrap et al., 2020). This is because neurons are commonly thought of as single-compartment voltage integrators, a view hardly compatible with the separation of feedback error propagation from the feedforward flow of ascending information that backpropagation requires (Richards and Lillicrap, 2019). Corroborated by anatomic evidence demonstrating the separation of bottom-up and top-down inputs in the mammalian cortex, recent theoretical models suggest that individual neurons could overcome this limitation by semi-independently integrating these two input streams onto their perisomatic and distal apical tuft dendrites, respectively (Guerguiev et al., 2017; Sacramento et al., 2018; Richards and Lillicrap, 2019; Richards et al., 2019; Payeur et al., 2021; Greedy et al., 2022), building on the well-documented segregation of these inputs along the apical axis (Larkum et al., 1999; Larkum, 2013). Although uncovering the learning algorithms operating in the brain is key to understanding biological intelligence, the hypothesis that dendrites could act as a neural substrate for implementing backpropagation remains untested. This is due, in large part, to the nature of conventional learning tasks, in which an experimenter is unable to rigorously define the set of optimal changes required for adaptive behavior (i.e., the cost function).
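A minimal sketch of this proposal, loosely in the spirit of segregated-dendrite models such as Guerguiev et al. (2017) and Sacramento et al. (2018) but greatly simplified, is shown below; the network size, nonlinearity, and fixed random feedback weights are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(1)

# Two-layer network whose hidden units have a basal compartment (feedforward
# drive) and an apical compartment receiving top-down feedback that carries an
# error signal. Sizes, nonlinearity, and fixed random feedback are illustrative.
n_in, n_hid, n_out = 5, 8, 2
W_basal  = rng.normal(0, 0.3, (n_hid, n_in))    # feedforward, onto basal dendrites
W_out    = rng.normal(0, 0.3, (n_out, n_hid))
B_apical = rng.normal(0, 0.3, (n_hid, n_out))   # fixed feedback, onto apical tufts

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, eta=0.2):
    global W_basal, W_out
    basal = W_basal @ x                          # feedforward drive
    h = sigmoid(basal)                           # somatic rate
    y = W_out @ h                                # network output
    err = target - y                             # output error
    apical = B_apical @ err                      # error delivered to apical tufts
    # Basal synapses change in proportion to the apical (teaching) signal and
    # locally available quantities; the fixed random feedback avoids the
    # weight-transport problem of exact backpropagation.
    W_out   += eta * np.outer(err, h)
    W_basal += eta * np.outer(apical * h * (1.0 - h), x)
    return float(np.sum(err ** 2))

x, target = rng.random(n_in), np.array([1.0, -1.0])
losses = [train_step(x, target) for _ in range(300)]
print(round(losses[0], 3), round(losses[-1], 5))   # squared error shrinks toward zero
```

Testing whether cortical neurons exploit anything like this scheme, however, requires a task in which the cost function, and hence the error delivered to each neuron, can be defined experimentally.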
To solve this problem, Francioni and colleagues are applying a new approach based on a brain–computer interface in mice (Fig. 1E). Compared with other learning tasks, brain–computer interface tasks allow an experimenter to directly connect the activity of defined neurons to feedback stimuli and reward (Clancy et al., 2014; Neely et al., 2018; Athalye et al., 2020). This allows the experimenter to unambiguously formalize both the cost function and the error, defined as the distance between output and target activity at any given time.
Using this approach, combined with chronic simultaneous GCaMP imaging of somatic and dendritic activity, they are testing the hypothesis that backpropagation-like credit assignment signals manifest as differential soma-dendrite coordination. Francioni and colleagues focus their analysis on the relative amplitude of GCaMP transients concurrently recorded in the somatic and dendritic compartments. They hypothesize that changes in somatodendritic coupling are neuron-specific, depend on the causal contribution of those neurons to task performance, and correlate with changes in firing patterns over days. These data will highlight whether dendrites can act as a substrate for the implementation of backpropagation-like credit assignment in cortical circuits during learning.
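The quantities at stake are simple to state. The sketch below illustrates one way the moment-to-moment BCI error and a per-neuron somatodendritic coupling index could be computed; the variable names and the regression-slope definition of coupling are hypothetical choices for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def bci_error(decoded_output, target_output):
    """BCI cost at each time point: distance between the activity-driven output
    and the target (hypothetical definition for illustration)."""
    return np.abs(decoded_output - target_output)

def somatodendritic_coupling(soma_dff, dendrite_dff):
    """Per-neuron coupling index: slope of the regression of dendritic on somatic
    GCaMP transient amplitudes. A slope near 1 indicates tight coupling; values
    well below 1 indicate attenuated or decoupled dendritic events."""
    slope, _ = np.polyfit(soma_dff, dendrite_dff, 1)
    return slope

soma = np.array([0.5, 1.0, 2.0, 3.5])          # somatic transient amplitudes (dF/F)
dend = np.array([0.4, 0.9, 1.6, 2.6])          # matched dendritic amplitudes
print(round(somatodendritic_coupling(soma, dend), 2))   # ~0.72
```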
Dendritic computation and optimization
To optimize its behavior, an animal must adapt the representations that participate in that behavior. It is increasingly recognized that this requires feedforward information to be guided by a feedback signal whose function is to optimize the representations. Neurobiologists have pointed out that the bitufted morphology of many principal neurons is well suited for such operations (Körding and König, 2001; Larkum, 2013). Furthermore, dendrite-dependent bursting and short-term plasticity enable the communication of different streams of information to and from other neurons (Izhikevich, 2003; Naud and Sprekeler, 2018; Greedy et al., 2022). These capabilities allow networks to communicate signals that can be used to optimize representations through long-term synaptic plasticity. Payeur et al. (2021) have recently proposed a theory of the coordination of learning that relies on apical dendrites. In this theory, apical dendrites are required to segregate inputs whose function is to coordinate learning from inputs whose function is to propagate sensory information (Fig. 1F). The theory predicts specific features that have yet to be observed experimentally, such as polarization of short-term plasticity and error-triggered increases in the trial-averaged fraction of bursts. Theoretical and experimental work is currently being devoted to further tests and refinements of this theory.
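One of the quantities at the center of this theory, the fraction of somatic events that are bursts, is straightforward to estimate from spike trains. The sketch below uses an arbitrary inter-spike-interval criterion (16 ms) purely for illustration.

```python
import numpy as np

def burst_fraction(spike_times, isi_threshold_ms=16.0):
    """Fraction of events that are bursts: spikes separated by less than the ISI
    threshold are grouped into a single event, and any event containing more
    than one spike counts as a burst. The 16 ms criterion is illustrative."""
    spike_times = np.sort(np.asarray(spike_times, dtype=float))
    if spike_times.size == 0:
        return 0.0
    event_sizes, size = [], 1
    for isi in np.diff(spike_times):
        if isi < isi_threshold_ms:
            size += 1                  # same event (burst in progress)
        else:
            event_sizes.append(size)
            size = 1
    event_sizes.append(size)
    return float(np.mean(np.array(event_sizes) > 1))

# Three events: a single spike, a 3-spike burst, and another single spike.
print(burst_fraction([10.0, 200.0, 208.0, 215.0, 600.0]))   # 1 burst / 3 events ≈ 0.33
```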
Past theoretical work has established a connection between the coordination of plasticity and attention (Roelfsema and Ooyen, 2005; Roelfsema and Holtmaat, 2018). The same properties of apical dendrites may therefore also enable the hierarchical processing of attention signals, which can provide a context-dependent optimization of representations. Thus, in theory, well-separated and active apical dendrites provide the brain with powerful means for both long-term and context-dependent optimization of representations.
The answers that theoretical neuroscience can provide to the question of dendritic function are limited by the computational tools available, particularly tools for model optimization. Even today, it is difficult to jointly optimize the parameters of excitability, compartmentalization, and connectivity. Researchers have developed compartmentalized neuron models that vastly improve the optimization of excitability parameters in single cells with active dendrites (Ujfalussy et al., 2018; Bicknell and Häusser, 2021), as well as methods for optimizing connectivity parameters in the presence of fixed dendritic excitability (Jones and Kording, 2021). Recently, computational work has generalized the applicability of deep learning tools to allow the joint optimization of dendritic excitability and connectivity parameters within recurrent and spiking neural networks (Harkin et al., 2022). This tool will allow a comparison between a network of neurons with dendrites and a network of neurons without dendrites but scaled to match the number of compartments, and thus help pin down the advantage of a dendritic architecture for specific tasks.
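A toy illustration of what joint optimization means in this context is given below: a single two-compartment rate unit in which both a connectivity parameter and a dendro-somatic coupling (excitability) parameter are adjusted by gradient descent on the same loss. This is a deliberately tiny stand-in for the recurrent, spiking networks handled by the actual tools, with all parameter values chosen for illustration.

```python
import numpy as np

def forward(x, w, g):
    """Two-compartment rate unit: dendritic drive w*x passes through a saturating
    nonlinearity and reaches the soma scaled by the coupling g."""
    v_dend = np.tanh(w * x)
    return g * v_dend

def joint_gradient_step(x, target, w, g, eta=0.1):
    """Gradient descent on squared error with respect to BOTH the connectivity
    parameter w and the excitability/coupling parameter g."""
    v_dend = np.tanh(w * x)
    y = g * v_dend
    err = y - target
    grad_g = err * v_dend                          # d(loss)/dg
    grad_w = err * g * (1.0 - v_dend ** 2) * x     # d(loss)/dw (chain rule)
    return w - eta * grad_w, g - eta * grad_g

w, g = 0.1, 0.5
for _ in range(500):
    w, g = joint_gradient_step(x=1.0, target=0.8, w=w, g=g)
print(round(forward(1.0, w, g), 3))                # ≈ 0.8
```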
Discussion
The present review highlights recent experimental and theoretical advances in our understanding of dendritic mechanisms and their role in the intact brain. Specifically, we discuss how these processes (1) support differential integration of synaptic input to facilitate learning (Fig. 1A), (2) serve as the substrate for behaviorally relevant synaptic plasticity (Fig. 1B), (3) act as gain modulators that influence perception (Fig. 1C), and (4) endow single neurons with immense computational power in silico (Fig. 1D,F) and in vivo (Fig. 1E). The work presented here shows that neurons are not simple feedforward voltage integrators, but rather highly complex and compartmentalized, with a wide range of ethological consequences that are just now being uncovered.
These recent advances demonstrate that dendrites contribute to fundamental computational principles that govern pyramidal neuron dynamics and learning. Further work is necessary to understand how in vitro dendritic properties constrain in vivo circuit-level dynamics and, by extension, perception and behavior. Specifically, it is unclear to what extent dendrites enable subcellular mechanisms for computations, such as predictive coding, sensory associations, and credit assignment (Larkum, 2013; Sacramento et al., 2018; Greedy et al., 2022; Mikulasch et al., 2022), or whether these are mediated by circuit-level mechanisms. As such, elucidating the role of active dendritic processing in intact neural circuits will provide critical insights into biological computation, as well as new architectural motifs and constraints for artificial intelligence.
The difficulty of obtaining spatiotemporally precise recordings that capture the intracellular dynamics underlying dendritic mechanisms in the intact brain poses one of the main barriers to understanding their functional roles. Current technologies are limited by physical and optical accessibility, or by low success rates in exceedingly difficult preparations. In particular, the limited temporal resolution of calcium indicators and the nonlinear relationship between fluorescence signals and the underlying electrophysiological changes present formidable obstacles to answering some of the central questions. As the field builds up collective experience and knowledge on these issues, we become more confident in interpreting the underlying physiology. However, promising technological innovations have emerged over the past 5-10 years that are poised to shape the field's future.
One of these technologies is voltage indicators, which change their fluorescence based on transmembrane potential differences. Their temporal resolution far exceeds that of two-photon calcium imaging, and they further allow us to record subthreshold membrane potentials. Recording subcellular potentials from neuronal populations holds tremendous promise for unraveling dendritic mechanisms in the intact brain. While their application in awake, behaving animals has been limited by the necessity of specialized equipment, the field has seen exciting progress over the last few years: with steadily improving signal-to-noise ratios, voltage indicators are becoming increasingly compatible with awake, in vivo experiments (Villette et al., 2019; Fan et al., 2020).
Strategies for dendritic manipulation include subcellular targeting of dendritic domains with optogenetic (Takahashi et al., 2016; Ranganathan et al., 2018) or pharmacological methods (Xu et al., 2012; Cichon and Gan, 2015; Manita et al., 2015). These methods allow direct or indirect dendritic manipulations that can be coupled with electrophysiological or optical recordings. However, they are limited in temporal resolution or anatomic specificity. Further developments, such as subcellularly localized opsins, may allow dendritic manipulation with unprecedented spatiotemporal precision in the intact brain.
The path ahead is an exciting one for in vivo dendrite research. The development of emerging technologies, coupled with the adoption of novel approaches, is enabling remarkable strides in our understanding of dendritic function. Further progress in the computational field is generating an ever more comprehensive theoretical framework to guide and interpret physiological experiments. Together, the field is poised to generate unprecedented insights into the fundamental principles of information processing in biological and artificial neural networks.
Footnotes
L.F. was supported by National Institutes of Health Grant R01NS106031. R.M.S.-A. and V.D.T. were supported by the Department of Brain and Cognitive Sciences at Massachusetts Institute of Technology. N.T. was supported by University of Bordeaux (Initiative of Excellence), Region Nouvelle-Aquitaine and Centre National de la Recherche Scientifique/Institut National de la Santé et de la Recherche Médicale ATIP-Avenir program. B.B. was supported by European Union Grant MSCA-IF 845956. L.M.P. was supported by NHMRC APP1086082 and APP1063533, Australian Research Council DP160103047, and Sylvia and Charles Viertel Charitable Foundation. C.G. was supported by the Smith Family Awards Program for Excellence in Biomedical Research and Air Force Office of Scientific Research Award FA9550-22-1-0344. We thank Mark T. Harnett and members of the Harnett laboratory for feedback on this manuscript.
The authors declare no competing financial interests.
Correspondence should be addressed to Lukas Fischer at Lff@mit.edu