Research Articles, Behavioral/Cognitive

Neural Encoding of Active Multi-Sensing Enhances Perceptual Decision-Making via a Synergistic Cross-Modal Interaction

Ioannis Delis, Robin A.A. Ince, Paul Sajda and Qi Wang
Journal of Neuroscience 16 March 2022, 42 (11) 2344-2355; DOI: https://doi.org/10.1523/JNEUROSCI.0861-21.2022
Ioannis Delis
1 School of Biomedical Sciences, University of Leeds, Leeds, LS2 9JT, United Kingdom
Robin A.A. Ince
2 School of Psychology and Neuroscience, University of Glasgow, G12 8QQ, United Kingdom
Paul Sajda
3 Department of Biomedical Engineering, Columbia University, New York, New York 10027
4 Data Science Institute, Columbia University, New York, New York 10027
Qi Wang
3 Department of Biomedical Engineering, Columbia University, New York, New York 10027

Abstract

Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.

SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.

  • active sensing
  • drift diffusion model
  • EEG
  • multisensory processing
  • partial information decomposition
  • perceptual decision-making

Introduction

In our daily lives, we make judgments based on noisy or incomplete information that we gather from our environment (Heekeren et al., 2004; Juavinett et al., 2018; Najafi and Churchland, 2018), usually including stimuli from multiple senses (Angelaki et al., 2009; Chandrasekaran, 2017). The acquired sensory information crucially depends on our actions — what we see, hear, and touch is influenced by our movements — a process known as active sensing (Schroeder et al., 2010; Yang et al., 2016b). For example, imagine attempting to cross the road on a rainy night. You need to interact with the environment, that is, turn your head and move your eyes, and process the incoming stimuli (e.g., car lights, slippery ground) to decide whether and when it is safe to do so. If you feel the road is slippery, you may need to monitor your steps and at the same time you may have to walk faster or step back if a car is approaching.

This example indicates that in real-world settings most perceptual decisions are made during active behaviors (Musall et al., 2019). The quality of the acquired evidence is driven by such active behaviors, which, in turn, affect the efficiency of the perceptual decisions that we make as a result of this active sensing process (Yang et al., 2016a; Gottlieb and Oudeyer, 2018). A first crucial element of fast and accurate perceptual decisions is the combination of evidence from different sensory streams (e.g., sight and touch) to form a unified percept and reduce uncertainty about the stimulus (Ernst and Banks, 2002). However, while there is extensive evidence that the integration of information from different sensory modalities improves perceptual choice accuracy (Lewis and Noppeney, 2010; Raposo et al., 2012) and response time (RT) (Drugowitsch et al., 2014), multisensory information processing has not been studied in an active scenario, where human participants are allowed to implement their own strategy for gathering evidence, as is the case in real-life settings.

Here we addressed this gap in the literature aiming to uncover the neural mechanisms underlying the formation of perceptual decisions via the active acquisition and processing of multisensory information. To achieve this, we capitalized on our previous work probing the neural correlates of active tactile decisions (Delis et al., 2018) and extended it to a multisensory setting that includes visual and haptic information presented simultaneously or separately. We hypothesized that the neural encoding of active sensory experience would be enhanced when multisensory information was available and that this neural multisensory gain would lead to improvements in decision-making performance.

An important aspect of our study is that the participants had full control of the evolution and duration of each trial. In other words, they could choose how much information to sample, where to sample this information from and for how long. Thus, we first aimed to characterize cortical coupling to continuous active sensing and then combined this with a popular sequential-sampling model of decision-making, the drift diffusion model (DDM) (Ratcliff and McKoon, 2008), to understand how the identified representations of active sensing behaviors influence decisions in the human brain. Here, to bridge the gap between active evidence acquisition and decision formation, we used the neural correlates of active (multi-)sensing to constrain the DDM.

Finally, to quantify cross-modal interactions in the brain, we applied a novel information-theoretic framework named partial information decomposition (PID) (Williams and Beer, 2010; Timme et al., 2014; Ince, 2017). PID quantifies the contribution of (1) each sensory modality and (2) cross-modal representational interactions (“redundant” or “synergistic”) to the multisensory neural representation (Park et al., 2018). Redundancy measures the similarity of the neural representation of the two modalities, while synergy indicates a better prediction of the neural response from both modalities simultaneously. Ultimately, this approach revealed the interactions between representations of different sensing modalities in the brain and shed light onto their role in decision-making behavior.

Materials and Methods

Experimental design and paradigm

Fourteen healthy right-handed participants (8 female, aged 24 ± 2 years) performed a two-alternative forced choice discrimination task in which they had to compare the amplitudes of two sinusoidal stimuli of the same frequency. All experimental procedures were reviewed and approved by the Institutional Review Board at Columbia University.

To generate visual and tactile stimuli that can be actively sensed, we used a haptic device called a Pantograph (Campion et al., 2005), which can be controlled to generate the sensation of exploring real surfaces (Fig. 1A). The Pantograph is a two-dimensional force-feedback device; that is, (1) it produces a 2D tactile output and (2) it simultaneously measures 2D information about the finger position and applied force. Here we used its first property to generate stimulation and the second property to record the kinematics of the movements performed by the participants while they actively explored the presented stimuli. In particular, we split the workspace of the Pantograph (of dimensions 110 mm × 60 mm) into two subspaces (left [L] and right [R], 55 mm × 60 mm each) and generated continuous sinusoidal stimuli of different amplitudes (but the same wavelength of 10 mm) in the two subspaces (Fig. 1B). Then, we instructed the participants to discriminate the amplitudes of the two subspaces as quickly and as accurately as possible (1) using only visual (V) information, (2) using only haptic (H) information, and (3) combining the two sensory cues (VH). Crucially for our investigation, participants were free to choose how to explore this virtual texture, that is, where and how fast to move their fingers and how long to explore each of the two sides before making their perceptual choice. Participants placed their right index finger on the interface plate of the Pantograph (Fig. 1A) and moved it freely to explore the textures of both subspaces (Fig. 1C) before reporting their choice (i.e., which amplitude was higher) by pressing one of two buttons on a keyboard (left or right arrow) using their left hand.

Specifically, in the H condition, the Pantograph produced sinusoidal forces of different intensity between L and R. When participants placed their index finger on the plate (interface) of the Pantograph, these forces caused fingertip deformations and thus tactile sensations that resembled exploring real surfaces. Thus, when moving their finger on the Pantograph, participants had the sensation of touching a rough surface (with different amplitudes between L and R; see Fig. 1B, middle). In the V condition, stimuli matching the tactile stimuli were presented on a screen of the same dimensions. More precisely, amplitudes of the sinusoidal virtual texture in H were translated into contrast levels of sinusoidal gratings in V; that is, participants saw black and white stripes of different intensity/contrast between L and R. Presentation of visual stimuli was generated using Psychtoolbox, and visual contrast varied between 0.5 and 1.5 around the default contrast value. The visual angle was 12 ± 6°. Stimulus presentation was controlled by a real-time hardware system (MATLAB xPC Target) to keep asynchrony below 1 ms. Importantly, to match the sense of touch, only the part of the workspace corresponding to the participant's finger location was revealed on the screen (i.e., a moving dot following the participant's finger; see Fig. 1B, left). Thus, in the V condition, grayscale visual textures (of different contrast between L and R) were shown wherever the participants moved their fingers, while no forces were applied to the participants' fingers (i.e., no H stimulation). Hence, in both sensory domains, participants could only sense the presented stimulus via active exploration (i.e., finger movements on the x axis). Accordingly, in the VH condition, both the visual and haptic textures were congruently presented and sensed by the participants using finger movements (Fig. 1B, right). Overall, participants had to decide whether L or R had the higher amplitude based on their haptic (in H trials), visual (in V trials), or visuo-haptic (in VH trials) perception of this virtual surface. Participants reported that they perceived the V and H signals as one stimulus in the VH condition.
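As an illustration of the stimulus construction described above, the sketch below computes the instantaneous grating value at a given finger position. The parameter names and rendering logic are simplified assumptions for illustration only, not the actual Pantograph or Psychtoolbox control code.

```python
import numpy as np

# Illustrative parameters (assumed names; the actual rendering code is not described here)
WORKSPACE_W = 110.0   # mm, full workspace width
WAVELENGTH = 10.0     # mm, grating wavelength (as stated in the text)
REF_AMPLITUDE = 1.0   # reference amplitude
CMP_AMPLITUDES = [0.5, 0.75, 0.9, 1.1, 1.25, 1.5]  # comparison levels

def grating_value(x_mm, amp_left, amp_right):
    """Sinusoidal texture value at horizontal finger position x_mm.

    In the H condition this value would scale the rendered force; in the V
    condition it would scale the local grating contrast under the moving dot.
    """
    amplitude = amp_left if x_mm < WORKSPACE_W / 2 else amp_right
    return amplitude * np.sin(2 * np.pi * x_mm / WAVELENGTH)

# Example: reference amplitude on the left, comparison amplitude 1.25 on the right
x_positions = np.linspace(0, WORKSPACE_W, 1000)
texture = [grating_value(x, REF_AMPLITUDE, 1.25) for x in x_positions]
```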

The amplitude difference between L and R (representing the difficulty of the task) varied from trial to trial. On each trial, participants compared the reference amplitude of 1 (presented on either the left or right subspace) with one of six comparison amplitude levels (0.5, 0.75, 0.9, 1.1, 1.25, 1.5). Each trial was initiated by the participant. Trial onset was defined as the time point at which horizontal finger velocity exceeded 0. Trial duration was determined by the participant and lasted for the whole period during which the participant made exploratory movements to sense the surface. The trial ended when the participant pressed the < or > key on the keyboard with their left hand to indicate their L or R choice. Each participant performed 20 trials for each amplitude level and for each sensory condition (V, H, VH), resulting in K = 20 trials × 6 amplitudes × 3 conditions = 360 trials in total. One participant showed poor behavioral performance (accuracy was not significantly different from chance level), and another participant's EEG recordings were significantly contaminated with eye movement artifacts; data from these 2 participants were therefore removed from all subsequent analyses. We report results from the remaining N = 12 participants. We also discarded trials in which participants did not respond within 10 s from trial onset or in which their RTs were shorter than 0.3 s. This resulted in the rejection of 4.9% of the trials.

Data recording and preprocessing

During performance of the task, we measured (1) the choice accuracy and RT of participants' responses, (2) movement kinematics (x, y coordinates of finger position recorded by the Pantograph) at a sampling frequency of 1000 Hz, and (3) EEG signals at a sampling frequency of 2048 Hz using a Biosemi EEG system (ActiveTwo AD-box, 64 Ag-AgCl active electrodes, 10-10 montage).

To compare accuracies and RTs across sensory conditions, we used two-way ANOVAs with factors condition and stimulus difference followed by Bonferroni-corrected post hoc t tests. We also fit psychometric curves to the accuracy data of each participant using a cumulative Gaussian distribution and computed the point of subjective equality (PSE) and slope of the curve at the PSE.
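As a concrete illustration of the psychometric analysis, the following sketch fits a cumulative Gaussian to hypothetical choice proportions and derives the PSE and the slope at the PSE; the data values and starting parameters are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, mu, sigma):
    """Psychometric function: probability of choosing the comparison stimulus."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical per-participant data: signed stimulus differences (comparison - reference)
# and the proportion of "comparison higher" choices at each level
stim_diff = np.array([-0.5, -0.25, -0.1, 0.1, 0.25, 0.5])
p_choice = np.array([0.05, 0.20, 0.40, 0.65, 0.85, 0.97])

(mu_hat, sigma_hat), _ = curve_fit(cumulative_gaussian, stim_diff, p_choice, p0=[0.0, 0.2])

pse = mu_hat                              # point of subjective equality (50% point)
slope_at_pse = norm.pdf(0.0) / sigma_hat  # derivative of the fitted CDF evaluated at the PSE
print(f"PSE = {pse:.3f}, slope at PSE = {slope_at_pse:.3f}")
```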

Single-trial movement velocity waveforms were computed as the derivatives of the recorded position. EEG recordings were preprocessed using EEGLab (Delorme and Makeig, 2004) as follows. EEG signals were first downsampled to 1000 Hz to match the movement kinematics and dynamics. Then, they were band-pass filtered between 1 and 50 Hz using a Hamming-windowed FIR filter. To isolate the purely neural component of the EEG data, we used the following procedure: we first reduced the dimensionality of the EEG data by reconstituting the data using only the top 32 principal components derived from principal component analysis. Although we recorded from 64 channels, we expected the recordings to span a considerably lower-dimensional space (as a result of correlations, crosstalk, and common sources); thus, to enhance the ability of independent component analysis to identify truly independent components, we reduced the data dimensionality by half using principal component analysis. Thereafter, an independent component analysis decomposition of the data was performed using the Infomax algorithm (Bell and Sejnowski, 1995). We then used an independent component analysis-based artifact removal algorithm called MARA (Winkler et al., 2011) to remove independent components attributed to blinks, horizontal eye movements, muscular activity (EMG), and any loose or highly noisy electrodes. MARA assigned each independent component a probability of being an artifact; we removed components with probabilities >0.5.
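A minimal sketch of this preprocessing chain is given below, assuming a channels × samples array. It substitutes scipy/scikit-learn routines (and FastICA) for the EEGLab Infomax and MARA steps used in the actual pipeline, so it illustrates the structure of the procedure rather than reproducing it.

```python
import numpy as np
from scipy.signal import firwin, filtfilt
from sklearn.decomposition import PCA, FastICA

fs_raw, fs_target = 2048, 1000

def preprocess(eeg):
    """eeg: channels x samples array at 2048 Hz (placeholder for the recorded data)."""
    # 1. Resample to 1000 Hz to match the kinematics (simple interpolation for illustration)
    n_out = int(eeg.shape[1] * fs_target / fs_raw)
    t_old = np.arange(eeg.shape[1]) / fs_raw
    t_new = np.arange(n_out) / fs_target
    eeg_rs = np.vstack([np.interp(t_new, t_old, ch) for ch in eeg])

    # 2. Band-pass 1-50 Hz with a Hamming-windowed FIR filter (order chosen for illustration)
    b = firwin(numtaps=501, cutoff=[1, 50], pass_zero=False, fs=fs_target, window="hamming")
    eeg_bp = filtfilt(b, [1.0], eeg_rs, axis=1)

    # 3. Keep the top 32 principal components, then run ICA on the reduced data.
    #    The paper used Infomax ICA and the MARA artifact classifier (EEGLab);
    #    FastICA is used here only as a stand-in.
    pca = PCA(n_components=32)
    scores = pca.fit_transform(eeg_bp.T)          # samples x 32
    ica = FastICA(n_components=32, max_iter=1000)
    sources = ica.fit_transform(scores)           # samples x 32 independent components

    # 4. Components flagged as artifacts (p > 0.5 by MARA in the original pipeline)
    #    would be zeroed out here before back-projecting to channel space.
    return sources, ica, pca
```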

Decoding finger kinematics from EEG signals

To assess the neural encoding of the participants' active sensory experience in the three sensory conditions, we used a multivariate linear regression analysis introduced by Di Liberto et al. (2015) and shown in Equation 1 below. As in our previous work (Delis et al., 2018), we hypothesized that the sensorimotor strategy used by the participant can be represented by the velocity profiles of the participant's exploratory movements, which capture changes of movement direction as well as speed changes. Thus, as the kinematic feature representing active sensing behavior, we used the one-dimensional (1-D) finger velocity along the x axis (capturing L-R finger movements); finger position along the x axis yielded qualitatively very similar results. Finger movement along the y axis (which did not provide any sensory information) did not show any significant correlation with the EEG signals and was not considered further. We thus performed a multivariate ridge regression (Crosse et al., 2016) predicting the 1-D finger velocity (along the x axis) from the EEG data. Specifically, our decoding analysis aimed to reconstruct the movement velocity from a linear combination of the EEG recordings with time lags ranging between -200 and 400 ms with respect to the instantaneous velocity values. That is, we aimed to decode the velocity profile s(t) of the participants' scanning movements from the simultaneously recorded EEG signals m(t, i), as follows:

$$\hat{s}(t) \cong \sum_{\tau}\sum_{i} g(\tau, i)\, m(t + \tau, i) \qquad (1)$$

where \(\hat{s}(t)\) is the reconstructed finger velocity and g(τ, i) is a filter that integrates information spatially across EEG channels i and temporally across time lags τ to decode the velocity profile from the EEG recordings. Here we used τ ∈ [-200 ms, 400 ms]; that is, we examined the EEG information about the finger velocity at time t from t - 200 ms (200 ms earlier) up to t + 400 ms (400 ms later). Varying these lags did not improve reconstruction performance and yielded qualitatively similar results, with the main effects always in the [-200 ms, 400 ms] temporal window, so we used this window for all further analyses. To learn the decoding filters and compute the velocity approximation accuracy (r2) between the original and the reconstructed velocity profiles, we used the multivariate temporal response function (mTRF) MATLAB toolbox implementing regularized linear (ridge) regression (Crosse et al., 2016). In all filter estimations, we used a cross-validation procedure. We first randomly split the data into two sets: a training set (80% of the trials) to learn the filters and a test set (the remaining 20% of the trials) to which the filters were applied to compute the reported r2 values. In the training set, we performed fivefold cross-validation to identify the optimal value of the ridge parameter λ (varying λ = 2^0, …, 2^20) that maximizes r2 between the estimated and the measured velocity. These investigations revealed that values of λ between 2^0 and 2^4 yielded almost identical r2 across all models; thus, we used λ = 2^2 for all models for consistency.
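The sketch below illustrates the core of this decoding step with plain NumPy: it builds a time-lagged design matrix from the EEG, solves the ridge-regularized least-squares problem corresponding to Equation 1, and scores the reconstruction with r2. Variable names, the lag subsampling, and the example regularization value are illustrative assumptions; the actual analysis used the mTRF toolbox with nested cross-validation.

```python
import numpy as np

def lagged_design_matrix(eeg, lags):
    """Stack time-lagged copies of each EEG channel: samples x (channels * n_lags).

    eeg: samples x channels; lags: lags in samples (negative = EEG before the velocity sample).
    """
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, -lag, axis=0)   # row t now holds m(t + lag, :)
        if lag > 0:
            shifted[-lag:, :] = 0              # zero out samples that wrapped around
        elif lag < 0:
            shifted[:-lag, :] = 0
        X[:, j * n_channels:(j + 1) * n_channels] = shifted
    return X

def ridge_decode(eeg, velocity, lags, lam):
    """Closed-form ridge solution g = (X'X + lam*I)^-1 X'y (cf. Eq. 1)."""
    X = lagged_design_matrix(eeg, lags)
    g = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ velocity)
    v_hat = X @ g
    r2 = np.corrcoef(velocity, v_hat)[0, 1] ** 2
    return g, v_hat, r2

# Example (hypothetical data): 64-channel EEG at 1000 Hz, lags from -200 to +400 ms
fs = 1000
lags = np.arange(int(-0.2 * fs), int(0.4 * fs) + 1, 10)   # subsampled lags for speed
# eeg = ...  (samples x 64), velocity = ... (samples,)
# g, v_hat, r2 = ridge_decode(eeg, velocity, lags, lam=2 ** 2)
```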

Since the weights of the decoding filters are not interpretable in terms of the neural origins of the underlying processes (Haufe et al., 2014), we transformed them into encoding filters f(τ, i) using the "forward model" formalism (Parra et al., 2002; Haufe et al., 2014), as follows:

$$f(\tau, i) = \frac{m(t, i)^{T} m(t, i)\, g(\tau, i)}{\hat{s}(t)^{T} \hat{s}(t)} \qquad (2)$$

We then plotted the weights of the forward models f(τ,i) at specific time lags τ as scalp maps to visualize the relationship between sensorimotor behavior and neural activity in each one of the three sensory conditions (V, H, VH).
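A direct translation of Equation 2 into code, under the assumption that the lagged EEG, the decoding weights, and the reconstructed velocity are available as arrays, might look as follows.

```python
import numpy as np

def decoder_to_encoder(M, g, s_hat):
    """Transform decoding weights into an interpretable forward (encoding) model,
    following Eq. 2 (Haufe et al., 2014): f = (M'M g) / (s_hat' s_hat).

    M: samples x features (lagged EEG), g: decoding weights (features,),
    s_hat: reconstructed velocity (samples,).
    """
    return (M.T @ M @ g) / (s_hat @ s_hat)
```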

Statistical analysis of EEG-behavior couplings

To determine statistical significance of the learned EEG-velocity mappings, we randomized the phase spectrum of the EEG signals, which disrupted the temporal relationship between the EEG activity and the kinematics while preserving the autocorrelation structure of the signals (Theiler et al., 1992). We generated 1000 phase-randomized surrogates of the EEG data and computed correlations with the kinematics to define the null distribution from which we estimated p values. This phase-randomization procedure maintains the magnitude spectrum of the EEG signals, thus conserving their autocorrelation structure, which is a fundamental feature of the original signals when the significance of cross-correlation is assessed. Hence, using this procedure, the obtained surrogates that define the null distribution are a more plausible comparison (resulting in a stricter statistical test) than randomly shuffled surrogates.
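The surrogate construction can be sketched as follows for a single signal; the helper and variable names are assumptions, and in the actual analysis 1000 such surrogates of the multichannel EEG were used to build the null distribution of reconstruction accuracies.

```python
import numpy as np

def phase_randomize(x, rng):
    """Return a surrogate with the same magnitude spectrum (hence the same
    autocorrelation structure) as x but randomized phases, destroying any
    temporal relationship with the kinematics."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                      # keep the DC component real
    if n % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

# Sketch of the null distribution (assumed helper names; 1000 surrogates as in the text)
# rng = np.random.default_rng(0)
# null_r2 = [decode_and_score(phase_randomize(eeg_channel, rng), velocity)
#            for _ in range(1000)]
```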

Informed modeling of decision-making performance

Having characterized the cortical coupling to the sensorimotor strategies in the three sensory conditions, we then probed the relationship between the identified EEG-velocity couplings and decision-making performance. To provide this missing link between active sensing and decision formation, we implemented a hierarchical DDM (HDDM), a well-known cognitive model of decision-making behavior, and informed it with the results of our previous decoding analysis.

We fit the participants' decision-making performance (i.e., accuracy and RT) with an HDDM (Wabersich and Vandekerckhove, 2014), which assumes a stochastic accumulation of sensory evidence over time, toward one of two decision boundaries corresponding to correct and incorrect choices (Ratcliff and McKoon, 2008). The model returns estimates of internal components of processing, such as the rate of evidence accumulation (drift rate), the distance between decision boundaries controlling the amount of evidence required for a decision (decision boundary), a possible bias toward one of the two choices (starting point), and the duration of nondecision processes (nondecision time), which include stimulus encoding and response production. As per common practice, we assumed that stimulus differences affected the drift rate (Palmer et al., 2005).

In short, the model iteratively adjusts the above parameters to maximize the summed log likelihood of the predicted mean RT and accuracy. The DDM parameters were estimated in a hierarchical Bayesian framework, in which prior distributions of the model parameters were updated on the basis of the likelihood of the data given the model, to yield posterior distributions (Wiecki et al., 2013; Wabersich and Vandekerckhove, 2014). The use of Bayesian analysis, and specifically the HDDM, has several benefits relative to traditional DDM analysis. First, this framework supports the use of other variables as regressors of the model parameters to assess relations of the model parameters with other physiological or behavioral data (Frank et al., 2015; Turner et al., 2015; Nunez et al., 2017). This regression model, which is included in HDDM, allows estimation of trial-by-trial influences of a covariate (e.g., a brain measure) onto DDM parameters. In other words, trial-by-trial fluctuations of the estimated HDDM parameters can be approximated as a linear combination of other trial-by-trial measures of cognitive function (Wiecki et al., 2013; Forstmann et al., 2016). This property of the HDDM enabled us to establish the link between the results of the EEG-velocity coupling analysis and the decision parameters of the model, by using the EEG-velocity couplings as predictors of the HDDM parameters, as explained below (for an example of such a linear regression of the drift rate parameter, also see Eq. 3). Second, the model estimates posterior distributions of the main parameters (instead of deterministic values), which directly convey the uncertainty associated with parameter estimates (Kruschke, 2010). Third, as a result of the above, the hierarchical structure of the model allows estimation of the HDDM parameters across participants and conditions, thus yielding distributions at different levels of the model hierarchy (e.g., the population level and the participant level, respectively). In this way, the HDDM capitalizes on the statistical power offered by pooling data across participants (population-level parameters) but at the same time accounts for differences across participants (represented by the variance of the population-level distribution and the individual participant-level estimates). Fourth, the Bayesian hierarchical framework has been shown to be especially effective when the number of observations is low (Ratcliff and Childers, 2015).

To implement the hierarchical DDM, we used the JAGS Wiener module (Wabersich and Vandekerckhove, 2014) in JAGS (Plummer, 2003), via the Matjags interface in MATLAB, to estimate posterior distributions. For each trial, the likelihood of accuracy and RT was assessed by providing the Wiener first-passage time distribution with the four model parameters (boundary separation, starting point, nondecision time, and drift rate). Capitalizing on the advantages of the HDDM, we ran the model pooling data across all participants and conditions and estimated both population-level and participant-level distributions. Parameters were drawn from uniformly distributed priors and were estimated with noninformative mean and SD group priors. As per standard practice for accuracy-coded data, the starting point was set as the midpoint between the two decision boundaries, because participants could not develop a bias toward correct or incorrect choices. For each model, we ran three separate Markov chains with 5500 samples of the posterior parameters each; the first 500 were discarded (as "burn-in") and the rest were subsampled ("thinned") by a factor of 50, following the conventional approach to MCMC sampling whereby initial samples are likely to be unreliable because of the selection of a random starting point and neighboring samples are likely to be highly correlated (Wabersich and Vandekerckhove, 2014). The remaining samples constituted the probability distributions of each estimated parameter. To ensure convergence of the chains, we computed the Gelman-Rubin R̂ statistic (which compares within-chain and between-chain variance) and verified that all group-level parameters had an R̂ close to 1 and always <1.01.
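For reference, the convergence check described above can be computed as in the sketch below (a standard Gelman-Rubin statistic over post-burn-in chains; the array layout is an assumption).

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin convergence statistic (R-hat) for one parameter.

    chains: array of shape (n_chains, n_samples) of post-burn-in MCMC draws.
    Values close to 1 (here, < 1.01) indicate that within-chain and
    between-chain variances agree, i.e., the chains have converged.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled posterior variance estimate
    return np.sqrt(var_hat / W)
```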

Here, to obtain a mechanistic account of the effect of EEG-velocity coupling on decision-making behavior, we incorporated the single-trial measures of these couplings (r2 values defined above) into the HDDM parameter estimation (see Fig. 3B). Specifically, as part of the model fitting within the HDDM framework, we used the single-trial velocity reconstruction accuracies r2 as regressors of the decision parameters to assess the relationship between trial-to-trial variations in EEG-velocity couplings and each model parameter. Furthermore, to characterize the effect of active sensing movements on decision formation, we also incorporated movement parameters in the HDDM framework. Specifically, we computed the following movement parameters: (1) the average finger velocity (vm) on each trial; (2) the number of crossings (ncr) between L and R, which is an indicator of the time it took participants to switch between the two stimuli; and (3) the time participants spent exploring one of the two stimuli (here we arbitrarily selected the low-amplitude stimulus on each trial, tlow) as an indicator of exploration time. To understand how these movement parameters affect the decision-making process, and specifically whether they relate to (1) sensory processing and movement planning/execution (i.e., nondecision processes) and/or (2) evidence accumulation (i.e., decision processes) and/or (3) the speed-accuracy trade-off adopted by the participants, we used these parameters as regressors for nondecision time, drift rate, and decision boundary, as follows:

$$\tau = \beta_0 + \beta_1 r^2 + \beta_v v_m + \beta_{sw} n_{cr} + \beta_{exp} t_{low} \qquad (3)$$

$$\delta = \gamma_0 + \gamma_1 r^2 s + \gamma_{sw} n_{cr} + \gamma_{exp} t_{low} \qquad (4)$$

$$\alpha = \theta_0 + \theta_1 r^2 + \theta_v v_m + \theta_{sw} n_{cr} + \theta_{exp} t_{low} \qquad (5)$$

where τ, δ, and α represent the single-trial nondecision time, drift rate, and decision boundary, respectively. Velocity reconstruction accuracy r2, mean finger velocity vm, number of crossings ncr, and time spent exploring the lower-amplitude stimulus tlow are the single-trial predictor variables, with regression coefficients βi, γi, and θi, respectively, and s = 0.1, 0.25, 0.5 is the stimulus difference on each trial k = 1,…,K of each participant n = 1,…,N. As per common practice, we modeled a linear relationship between drift rates and stimulus differences, reflecting the dependence of the speed of information integration on the amount of evidence available (Palmer et al., 2005; Ratcliff and McKoon, 2008).
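To make the structure of Equations 3-5 concrete, the sketch below forward-simulates a single trial of a diffusion process whose parameters are built from that trial's regressors. The coefficient values, names, and the simple Euler scheme are illustrative assumptions and do not reproduce the hierarchical Bayesian estimation performed with JAGS.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(r2, v_m, n_cr, t_low, s, coef, dt=0.001, sigma=1.0):
    """Forward-simulate one trial of the regression DDM implied by Eqs. 3-5.

    `coef` holds illustrative regression coefficients (beta, gamma, theta);
    the paper estimates these hierarchically with JAGS, not reproduced here.
    """
    tau   = coef["b0"] + coef["b1"]*r2 + coef["bv"]*v_m + coef["bsw"]*n_cr + coef["bexp"]*t_low
    drift = coef["g0"] + coef["g1"]*r2*s + coef["gsw"]*n_cr + coef["gexp"]*t_low
    bound = coef["t0"] + coef["t1"]*r2 + coef["tv"]*v_m + coef["tsw"]*n_cr + coef["texp"]*t_low

    x, t = bound / 2.0, 0.0          # unbiased starting point (midpoint between boundaries)
    while 0.0 < x < bound:           # Euler simulation of the evidence accumulation
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    correct = x >= bound
    return correct, t + tau          # accuracy and RT (decision time + nondecision time)

# Example usage with made-up coefficients and trial measures:
# coef = dict(b0=0.8, b1=-0.5, bv=0.0, bsw=0.05, bexp=0.1,
#             g0=0.2, g1=4.0, gsw=0.0, gexp=0.0,
#             t0=1.5, t1=0.3, tv=0.0, tsw=0.0, texp=0.0)
# correct, rt = simulate_trial(r2=0.05, v_m=30.0, n_cr=3, t_low=1.2, s=0.25, coef=coef)
```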

By using the above regression approach, we were able to test the influence of the above EEG and movement parameters on each of the HDDM parameters. Thus, we tested different models in which the single-trial values of the above parameters were used as predictors for all combinations of the HDDM parameters (drift rate, nondecision time, and decision boundary). To select the best-fitting model, we used the deviance information criterion (DIC), a measure widely used for fit assessment and comparison of hierarchical models (Spiegelhalter et al., 2002). DIC selects the model that achieves the best trade-off between goodness of fit and model complexity. Lower DIC values favor models with the highest likelihood and the fewest degrees of freedom.

Statistical analysis of modeling results

Posterior probability densities of each regression coefficient were estimated using the sampling procedure described above. Significantly positive (negative) effects were determined when >95% of the posterior density was higher (lower) than 0. To take into account the hierarchical structure of the model which estimated both population-level distributions and participant-level distributions of the parameters, all statistical tests at the population level were performed by contrasting the group-level distributions (not the individual participant means) across sensory conditions. This hierarchical statistical testing has been shown to reduce biases and actually yield conservative effect sizes (Boehm et al., 2018).

PID

We then aimed to uncover whether the visual (V) and haptic (H) neural representations of active sensing contained the same information (redundancy) that is present in the multisensory (VH) representation or to what extent their contributions are distinct (unique information) or complementary (synergy). To achieve this, we used the PID (Williams and Beer, 2010; Timme et al., 2014) applied to the predictions of the finger velocity encoding models learned in the different experimental conditions. PID provides an information theoretic approach to compare the outputs of different predictive models that goes beyond simply comparing accuracy to determine whether the different models share or convey unique predictive information content (Daube et al., 2019b). PID extends the concept of co-information (McGill, 1954), which is defined as follows:

$$I(VH; V; H) = I(VH; V) + I(VH; H) - I(VH; [V, H]) \qquad (6)$$

where I(X;Y) denotes the mutual information (MI) between variables X and Y. MI is a nonparametric measure of dependence between two variables which has the unique property that its effect size is additive (Shannon, 1948). Hence, co-information (also called interaction information when defined with opposite sign) quantifies the difference between the sum of the MI when each modality is considered alone and the MI when the two modalities are observed together (Park et al., 2018).

Positive values of this difference indicate that some information about the predictions of the multisensory VH model is shared between the predictions obtained from the models trained in the unisensory V and H conditions (i.e., there are common or redundant representations of finger velocity in both V and H conditions). Negative values of the interaction information indicate a super-additive or synergistic interaction between the predictions of the V and H models; that is, the two models provide more information about the multisensory (VH) prediction when observed together than would be expected from observing each individually. However, interaction information measures the net difference between synergy and redundancy in the system; thus, it is possible to have zero interaction information, even in the presence of redundant and synergistic interactions that cancel out in the net value (Williams and Beer, 2010; Ince, 2017). This occurs because classic Shannon quantities cannot separate redundant and synergistic contributions, which has led to a growing field developing PID measures to address this shortcoming.

To give a simple example of such a case, let us consider three variables, each consisting of two bits (i.e., binary (0/1) variables with p(0) = p(1) = 0.5). Let us also assume that the first bit is shared between all three variables and the second bit follows the XOR distribution across the three variables. In this case, there is clear redundancy and synergistic structure, but co-information/interaction information is zero (Griffith and Koch, 2014).

More precisely, PID addresses this methodological problem by decomposing the joint MI into unique, redundant, and synergistic components, as follows:

$$I(VH; [V, H]) = I_{uni}(VH; V) + I_{uni}(VH; H) + I_{red}(VH; V, H) + I_{syn}(VH; V, H) \qquad (7)$$

where Iuni(VH;V) is the part of the VH model predictions that can be explained only from the V model predictions, Iuni(VH;H) is the part of the VH model predictions that can be explained only from the H model predictions, Ired(VH;V,H) is the part of the VH model predictions that is common (redundant) to both the V and H model predictions, and Isyn(VH;V,H) is the extra (synergistic) information about the VH model predictions that arises when both V and H predictions are considered together. In other words, PID decomposes the joint MI between two predictor signals (here the EEG activity predicted from encoding models trained in the unisensory V and H conditions) and a target signal (here the EEG activity predicted from an encoding model trained in the multisensory VH condition) into four terms: redundancy, the unique information in each predictor, and synergy. Redundancy quantifies the information in the target signal that is shared between the two predictor signals. Synergy quantifies the improvement in prediction of the target when both predictors are observed together and represents information about the target signal that cannot be obtained from the individual predictors separately.

To perform PID here, we used a recent implementation based on common change in surprisal for Gaussian variables (Ince, 2017), which has been shown to be effective when applied to neuroimaging data (Park et al., 2018; Daube et al., 2019a).
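As a simplified sketch of this step, the code below estimates the net interaction information of Equation 6 for jointly Gaussian model predictions. The full decomposition of Equation 7 relies on the common-change-in-surprisal redundancy measure (Ince, 2017), which is not reproduced here, so this sketch only recovers the net balance of redundancy and synergy.

```python
import numpy as np

def gaussian_mi(x, y):
    """Mutual information (in bits) between jointly Gaussian variables.

    x, y: arrays of shape (n_samples,) or (n_samples, n_dims); MI is estimated
    from covariances, so it is exact only under the Gaussian assumption.
    """
    def h(z):  # differential entropy up to constants that cancel in the MI
        cov = np.atleast_2d(np.cov(z, rowvar=False))
        return 0.5 * np.log2(np.linalg.det(cov))
    return h(x) + h(y) - h(np.column_stack([x, y]))

def interaction_information(pred_vh, pred_v, pred_h):
    """Net co-information I(VH;V) + I(VH;H) - I(VH;[V,H]) as in Eq. 6.

    Positive values indicate net redundancy, negative values net synergy;
    the PID of Eq. 7 additionally separates the two and is not reproduced here.
    """
    joint_vh_predictors = np.column_stack([pred_v, pred_h])
    return (gaussian_mi(pred_vh, pred_v)
            + gaussian_mi(pred_vh, pred_h)
            - gaussian_mi(pred_vh, joint_vh_predictors))
```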

To implement the above approach on our data, we used the recordings of the VH condition, where the two unisensory representations of active sensory experience could be directly compared with the multisensory representation. We took the velocity-encoding models obtained in each condition (V, H, VH) and applied them to the VH data (see Eq. 2) to obtain the V, H, and VH predictions of each EEG sensor activity for all VH trials. Since the unisensory models (V, H) were fit in the corresponding unisensory condition, they could only have learned a unisensory representation, whereas the VH model learned a multisensory representation of active sensing velocity. Thus, we applied PID for each participant separately to predict the VH model predictions from the two unisensory V and H model predictions, which enabled us to quantify the cross-modal interactions between the two unisensory representations across all EEG sensors.

Statistical analysis of PID results

We performed this decomposition independently for each EEG channel and obtained scalp maps for the four PID terms (redundant information, unique information of V, unique information of H, synergistic information) for each participant. To avoid overfitting, we implemented a fivefold cross-validation procedure. We randomly split the VH data into five subsets, used four of them to learn the VH, V, and H models, and performed the PID on the held-out subset. We repeated this process five times to obtain PID values for all the VH data. To assess the statistical significance of the obtained values, we performed a permutation test. Specifically, we shuffled the target signal (i.e., the VH model of active sensing) 1000 times while keeping the two predictor signals (the V and H models, respectively) unchanged and applied PID to predict the VH model surrogate data. Output values of the original PID decomposition were considered significant if they exceeded the 99th percentile of the distribution obtained from the surrogate data. Multiple comparisons were corrected using the false discovery rate (FDR; Genovese et al., 2002).
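The significance assessment can be sketched as follows, assuming per-channel PID values and their surrogate distributions are available; the conversion of the permutation criterion into p values and the Benjamini-Hochberg step are illustrative implementations of the procedure described above.

```python
import numpy as np

def permutation_pvalue(observed, null_values):
    """One-sided p value: fraction of surrogate values at least as large as the
    observed value (the text uses the 99th percentile of the null as cutoff)."""
    null_values = np.asarray(null_values)
    return (1 + np.sum(null_values >= observed)) / (1 + len(null_values))

def fdr_bh(p_values, q=0.01):
    """Benjamini-Hochberg procedure: returns a boolean mask of significant tests."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresh
    significant = np.zeros_like(p, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])          # largest index meeting its threshold
        significant[order[:k + 1]] = True
    return significant

# Sketch: one p value per EEG channel for, e.g., the synergy term, then FDR correction
# p_per_channel = [permutation_pvalue(syn[ch], syn_null[ch]) for ch in range(64)]
# sig_channels = fdr_bh(p_per_channel, q=0.01)
```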

Results

We collected behavioral and EEG data while 14 participants actively interrogated a two-dimensional texture stimulus that differed in its amplitude in one dimension (L vs R). Participants used V, H, or VH to make a two-alternative forced choice, that is, to report (via a key press) as quickly and as accurately as possible on which side (L or R) the texture stimulus had the higher amplitude (Fig. 1B). To sample information from both sides, participants performed finger movements scanning the workspace of the Pantograph before reaching a decision (Fig. 1C).

Figure 1.

Experimental design and behavioral results. A, The Pantograph is a haptic device used to render virtual surfaces that can be actively sensed. Top, The parts of the Pantograph shown from a lateral view. Participants placed their index finger on the interface plate. Bottom, The Pantograph device used in this experiment. B, The stimulus in the three sensory conditions. We programmed the Pantograph to generate a virtual grating texture. The workspace was split into two subspaces (L and R) that differed in the amplitude of the virtual surface that the participants actively sensed. One of the two sides (randomly assigned) had the reference amplitude (equal to 1), and the other had the comparison amplitude that varied on each trial taking one of the values: 0.5, 0.75, 0.9, 1.1, 1.25, and 1.5. Participants performed the task using V, H, or VH. Amplitude of the stimulus in the haptic domain (H) was translated as contrast in the visual domain (V). Crucially, to match the H condition, only a moving dot following the participant's finger was revealed on the screen in V. C, Index finger trajectory indicating the scanning pattern of the virtual texture in one trial. On this trial, the participant actively sensed the left subspace first, then moved to the right subspace and explored it before coming back to the left subspace again and reporting their choice. D, Psychometric curves indicating the percentage of nonreference choices for all three sensory conditions (blue represents V; green represents H; red represents VH) and for all stimulus differences. Large dots represent average percentage of choices across participants. Smaller dots represent individual participant means. Data are fit using cumulative Gaussian functions. E, Cumulative distributions (CDF) of RTs for all three sensory conditions (blue represents V; green represents H; red represents VH) across all trials of all participants. Thick lines indicate CDFs across all participant data. Thin lines indicate individual participant CDFs for each sensory condition.

In the H condition, the Pantograph (for more details on the device used to generate the stimuli, see Materials and Methods) was programmed to produce sinusoidal forces, which yielded the sensation of exploring a rough texture surface (with different amplitudes between L and R) when participants moved their index finger on the workspace of the Pantograph (see Fig. 1B, middle). In the visual domain, participants moved their fingers to reveal grayscale stripes of different intensity/contrast between L and R (Fig. 1B, left). In the VH condition, both the visual and haptic textures were congruently presented wherever the participants moved their fingers (Fig. 1B, right). Overall, participants had to decide whether L or R had the higher amplitude based on their haptic (in H trials), visual (in V trials), or visuo-haptic (in VH trials) perception of this virtual surface.

Multisensory gain in behavioral performance

Multisensory stimulation resulted in significantly higher discrimination accuracy (91.5 ± 2.1% in VH vs 85.8 ± 2.2% in V and 86.3 ± 2.2% in H; two-way ANOVA with factors condition and stimulus difference, F(2,99) = 5.64, p < 0.005; see also the slopes of the corresponding psychometric curves in Fig. 1D: PSEv = 0.034 ± 0.013, PSEh = −0.001 ± 0.009, PSEvh = −0.019 ± 0.007, slopev = 2.397 ± 0.2964, slopeh = 1.826 ± 0.147, slopevh = 3.001 ± 0.2514) compared with the unisensory conditions (post hoc t tests, Bonferroni-corrected, p = 0.009 for V-VH and p = 0.019 for H-VH). RTs were also shorter in VH (4.11 ± 0.30 s vs 4.41 ± 0.31 s in V and 4.25 ± 0.29 s in H; two-way ANOVA with factors condition and stimulus difference, F(2,99) = 3.19, p = 0.045; see also the corresponding cumulative distribution functions in the three conditions, Fig. 1E). This RT effect was significant at the population level for VH versus V (post hoc t test, p = 0.021, Bonferroni-corrected) but not for VH versus H (post hoc t test, p = 0.066, Bonferroni-corrected). As expected, we also found a main effect of stimulus difference, with accuracy increasing (F(2) = 91.82, p < 0.0001) and RTs decreasing (F(2) = 4.56, p < 0.02) with larger stimulus differences. There was no interaction between sensory condition and stimulus difference on either measure (accuracy: F(4) = 0.66, p = 0.62; RTs: F(4) = 0.05, p = 0.99). Together, these results indicate that multisensory information improved decision-making performance.

Reconstruction of active sensing velocity from EEG recordings

We then aimed to establish a relationship between brain activity and the active sensory experience of the participants in each one of the three sensory conditions. To this end, we performed a multivariate ridge regression (Crosse et al., 2016) between the EEG data and the 1-D finger velocity data (on the x axis) to quantify neural encoding of sensorimotor behavior.

This analysis yielded the optimal linear combination of EEG channel activations with time lags ranging between –200 and 400 ms that approximated the measured movement velocities. We found that reconstruction accuracy r2 was above chance level in all sensory conditions (all p values < 0.01; Fig. 2B). To obtain interpretable topographies of the neural activity underlying these EEG-velocity couplings, we inverted the obtained velocity-decoding (backward) models into velocity-encoding (forward) models (Parra et al., 2005; Haufe et al., 2014). This revealed that centro-frontal locations (with positive weights) and occipital locations (with negative weights) contributed most to velocity reconstruction in the three sensory conditions with time lags ranging from 20 to 160 ms; Figure 2A shows the scalp topographies of the forward models, and Figure 2C, D shows the corresponding temporal response functions (averaged across frontal and occipital channels, respectively) in the three sensory conditions.

Figure 2.

Results of velocity reconstruction analysis using EEG signals. A, Scalp topographies of the forward models representing neural encoding of instantaneous finger velocity for the three sensory conditions. The presented scalp maps show velocity-encoding EEG signals averaged over the following time windows: 20 and 120 ms lags between velocity and EEG for V and VH, and 60 and 160 ms lags for H. B, Accuracy of the velocity reconstruction from the EEG signals measured using the squared correlation coefficient (r2) between the original and the approximated velocity profile in the three sensory conditions (blue represents V; green represents H; red represents VH). Bars represent means across participants. Error bars indicate SEM. Dots represent individual participant data. C, D, Temporal response functions (TRFs) of the velocity-encoding EEG activity in the three sensory conditions (blue represents V; green represents H; red represents VH) averaged over frontal electrodes (in C) and over occipital electrodes (in D).

Impact of active multi-sensing on the quality of perceptual evidence

To characterize the relationship between the identified EEG-velocity couplings and decision-making performance, we used an HDDM. In brief, the HDDM decomposes task performance (i.e., accuracy and RT) into internal components of processing representing the rate of evidence integration (drift rate, δ), the amount of evidence required to make a choice (decision boundary separation, α), and the duration of other processes, such as stimulus encoding and response production (nondecision time, τ). Ultimately, by comparing the obtained values of all three core HDDM parameters across the V, H, and VH trials, we could associate any behavioral differences resulting from the deployment of multisensory information (more accurate and faster perceptual choices as in Fig. 1) with the constituent internal process reflected by each model parameter.

Here, to obtain a mechanistic account of the formation of perceptual decisions via the active sampling of (multi-)sensory information, we incorporated the single-trial measures of brain-sensing couplings (r2 values) into the HDDM parameter estimation (Fig. 3B). Specifically, we applied the obtained decoding filters to the single-trial EEG data and computed velocity reconstruction accuracies for each trial of each sensory condition (using a nested cross-validation process; for more details, see Materials and Methods). Then, as part of the HDDM fitting process, we integrated these single-trial r2 values in the HDDM framework by using them as regressors of the three core HDDM parameters (drift rate, nondecision time, and decision boundary; see Materials and Methods). The corresponding regression coefficients were estimated together with the HDDM parameters, thus enabling the assessment of the relationship between trial-to-trial variations in EEG-velocity couplings and each model parameter. We also used as regressors three movement parameters (average velocity vm, number of crossings between L and R ncr, and time spent on the lower amplitude stimulus tlow), which served to dissociate the effect of the exploratory movements (captured by these parameters) on decision formation from the effect of the neural encoding of these active sensing movements (captured by r2).

Figure 3.

Informed modeling of decision-making behavior. A, Comparison of the best-fitting model (with r2 as a regressor of drift rate δ only and ncr, tlow as regressors of nondecision time τ only) with alternate models using the DIC. Positive ΔDIC (DICmodel – DICoptimal) values for all six models indicate that the model of choice achieved a better trade-off between goodness of fit and number of free parameters. B, Graphical representation showing hierarchical estimation of HDDM parameters. Round nodes represent continuous random variables. Double-bordered nodes represent variables defined in terms of other variables. Shaded nodes represent recorded or computed signals, that is, single-trial behavioral data (accuracy, RT, and stimulus differences, s), EEG-velocity couplings (r2), and kinematic parameters (ncr, tlow). Parameters are modeled as Gaussian random variables with inferred means μ and variances σ2. Plates denote that multiple random variables share the same parents and children. The outer plate is over sensory conditions (V, H, VH), and the inner plate is over all trials (K) and participants (N). C, Behavioral RT distributions are shown as histograms for each sensory condition (blue represents V; green represents H; red represents VH) for correct (right) and incorrect (left) trials together with the HDDM fits (black lines). Higher histogram values on the right indicate higher proportion of correct choices. D, Posterior distributions of regression coefficients (γ1) of the EEG-velocity couplings (r2), as predictors of the drift rate (δ) of the HDDM shown in A. The three colored curves indicate posterior distributions for the three sensory conditions (blue represents V; green represents H; red represents VH). E, Posterior distributions of decision boundaries for the three sensory conditions (blue represents V; green represents H; red represents VH). F, Cross-participant correlation of differences in choice accuracy (ΔAcc, x axis) and differences in β1 (Δβ1, y axis) between the multisensory (VH) and the two unisensory (V, H) conditions (yellow represents VH-V; purple represents VH-H). G, Posterior distributions of regression coefficients (βsw) of the number of crossings between L and R (ncr), as predictor of nondecision time (τ) of the HDDM shown in A. H, Posterior distributions of regression coefficients (βexp) of the time spent on the low-amplitude stimulus (tlow), as predictor of nondecision time (τ) of the HDDM shown in A. I, Cross-participant correlation of average RTs across trials and sensory conditions (x axis) and βexp (y axis).

We found that the best-fitting model (achieving the best complexity-approximation trade-off as evaluated by the DIC; Fig. 3A) was the one using r2 as a regressor of the drift rate only and ncr, tlow as regressors of nondecision time only (Fig. 3B shows a graphical illustration of the best-fitting model, and Fig. 3C shows the model fits of the accuracy and RT data, where bars represent actual data and lines represent model fits). The means and CIs of the estimated values of the three core HDDM parameters are reported in Table 1. Crucially for our investigation here, the EEG-velocity couplings r2 were predictive of drift rates in single trials (regression coefficients γ1 were larger than zero for all three sensory conditions, Prob(γ1 (V) > 0) > 0.97, Prob(γ1 (H) > 0) > 0.99, Prob(γ1 (VH) > 0) > 0.999; Fig. 3D). Furthermore, the contribution of r2 to drift rate was higher in VH trials compared with V and H trials (Prob(γ1 (VH) > γ1 (V)) > 0.95 and Prob(γ1 (VH) > γ1 (H)) > 0.99; Fig. 3D), indicating a multisensory enhancement of evidence accumulation rates via an increased weighting of the EEG-velocity couplings in the VH condition.

Table 1.

Estimated values of the three core HDDM parameters for the best-fitting model

We then examined whether this multisensory gain could explain the observed improvements in behavioral performance when multisensory information is available. Indeed, this enhanced contribution of r2 to drift rate was predictive of multisensory improvements in behavioral performance. Specifically, cross-participant differences in γ1 across conditions correlated with the reported increases in accuracy (r = 0.58, p = 0.049 for VH vs V and r = 0.75, p = 0.005 for VH vs H; Fig. 3F), suggesting that differences in accuracies across participants were accounted for by the contributions of EEG-velocity couplings to evidence accumulation. Thus, participants with greater drift rate amplification achieved stronger enhancements in their behavioral performance when multisensory information was available.

We also found that both the switching time between the two stimuli, as captured by ncr, and the exploration time spent on one of the two stimuli, as captured by tlow, were predictive of nondecision time (Prob(βsw > 0) > 0.999, Prob(βexp > 0) > 0.999 for all of V, H, VH; Fig. 3G,H) in single trials, indicating that nondecision processes (i.e., related to sensory processing and movement planning/execution) depend on switching and exploration times. There was a positive cross-participant correlation (r = 0.695, p = 0.0121) between βexp and RT (averaged across trials and sensory conditions), suggesting that participants with larger contributions of exploration time to their nondecision times took longer to respond (Fig. 3I). However, we found no reliable difference in the corresponding regression coefficients (βsw, βexp) between the three sensory conditions (Prob(βsw (VH) > βsw (V)) = 0.632, Prob(βsw (VH) > βsw (H)) = 0.843, Prob(βexp (VH) > βexp (V)) = 0.107, Prob(βexp (VH) > βexp (H)) = 0.210; Fig. 3G,H). There was also no difference in the decision boundaries across the three sensory conditions (Prob(α(VH) > α(V)) = 0.731, Prob(α(VH) > α(H)) = 0.804; Fig. 3E). These results indicate that neither the contributions of switching and exploration times to nondecision time nor the amount of evidence required to make a decision depended on the sensory condition.

Quantifying multisensory interactions

Having established that the neural encoding of the behavioral kinematics is related to the multisensory gain in decision evidence, we then aimed to assess how the neural representations of the two unisensory stimuli (V, H) interact to form a multisensory representation. To this end, we used PID, which enables the quantification of cross-modal representational interactions in the human brain (for details, see Materials and Methods). Specifically, the PID information theoretic framework quantifies the degree to which (1) each unisensory (V,H) representation contributes uniquely to the encoding of active sensing behavior (unique V or H information), (2) the two unisensory (V,H) representations share information about active sensing (redundancy), and (3) the two unisensory (V,H) representations convey more information when observed simultaneously (synergy). Here, we used PID to predict the forward (velocity-encoding) VH model (target signal) from the two unisensory forward models V and H (predictor signals). The decomposition revealed that the V model provided unique information in right parieto-temporal locations, whereas the H model contributed uniquely in left prefrontal and parieto-occipital locations (Fig. 4A; all p values < 0.01, FDR-corrected). Crucially, we also found multisensory interactions in the form of (1) redundant effects in left prefrontal and parieto-occipital electrodes and (2) synergistic effects over left centroparietal scalp (Fig. 4A; all p values < 0.01, FDR-corrected). Here, a redundant interaction means that the representation of velocity is common to both the V and H modalities (Ince et al., 2017; Park et al., 2018). A synergistic interaction means a better prediction of the modeled multisensory response can be made when considering both the V and the H representations together (rather than independently). That is, knowledge of the simultaneous combination of the EEG signal predicted by V and H models gives more information about the VH EEG signal.

Figure 4.

Neural representations and cross-modal interactions. A, Results of PID applied to predict the multisensory (VH) model of active sensing from the two unisensory (V and H) models. Dots on the scalp topographies indicate the EEG channels that provide significant (p < 0.01, FDR-corrected) visual unique (top left), haptic unique (top right), redundant (bottom left), and synergistic (bottom right) neural information, respectively. B, Across-subject correlation between synergy in the two significant EEG channels (red represents CP3; blue represents C5) and choice accuracy in the VH condition.

Multisensory accuracy scales with synergistic interactions

Next, we investigated the behavioral relevance of the identified cross-modal interactions. In particular, we asked whether the identified synergistic representation of the two modalities was predictive of behavioral performance across participants. Indeed, we found a significant positive correlation (Pearson's r = 0.75 and 0.72, both p < 0.01) between synergy in the two significant channels (CP3 and C5) and accuracy in the VH condition, suggesting that participants with more synergistic representations at left centroparietal electrodes achieved better multisensory performance (Fig. 4B). This result suggests that synergy in contralateral centroparietal EEG signals modulates multisensory decision-making behavior. Because of the small sample size, we cannot be certain that this finding will generalize, but we nonetheless report it as an interesting exploratory result.
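For completeness, the following minimal sketch shows the form of this across-participant test; the synergy and accuracy arrays are random placeholders with an arbitrary sample size, not the study's data.

```python
# Across-participant correlation between a per-subject synergy estimate at one
# channel and multisensory (VH) choice accuracy. Placeholder data only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects = 12                                      # hypothetical sample size
synergy_cp3 = rng.normal(0.05, 0.02, n_subjects)     # placeholder synergy (bits)
accuracy_vh = np.clip(0.7 + 4.0 * synergy_cp3        # placeholder accuracies
                      + rng.normal(0, 0.05, n_subjects), 0, 1)

r, p = pearsonr(synergy_cp3, accuracy_vh)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```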

Discussion

In this work, we coupled neural decoding of continuous sensorimotor behavior with modeling of decision-making performance and a quantitative assessment of cross-modal neural interactions to understand how the human brain forms perceptual decisions via the active acquisition of multisensory evidence. We showed that the neural encoding of active sensing modulates the decision evidence regardless of the sensing modality. We further demonstrated that the simultaneous sensing of different modalities enhances this neural coupling and this enhancement drives the dynamics of active multisensory decisions. We finally dissected the neural information conveyed by cross-modal interactions and identified a potential neural mechanism supporting multisensory decisions.

Recent research on active sensing has uncovered the strategies humans use to sample sensory information (Yang et al., 2016b). Here we investigated active sensing in a decision-making task using a computational framework that decodes the neural activity encoding movement kinematics. Crucially, we took a first step in broadening this line of research to (1) include sensory information from multiple modalities and (2) reveal its neural underpinnings. These two developments enabled us to uncover the different sensory representations of active sampling behavior in the human brain.

To achieve this, we implemented an informed cognitive modeling approach that linked the neural correlates and the movement characteristics of active sensing behavior with the cognitive processes involved in decision-making. Specifically, we asked whether decision-making depends on the neural representations of active (multi-)sensing. To answer this question, we used a single-trial measure of the neural encoding of active sensing behavior as a predictor of decision-making performance and found that, indeed, trial-to-trial fluctuations in the neural representations of active sensing are predictive of the rate of evidence accumulation in all three sensory conditions (V, H, VH). Crucially, we showed that the multisensory (VH) representation of active sensing was a stronger predictor of drift rate (Fig. 3D), thus offering a neural link between active multi-sensing and perceptual decision-making. We also split the motion profile into its two main components, (1) switching between the two alternative stimuli and (2) exploration within one particular stimulus, and demonstrated that both components were predictive of the duration of nondecision processes (Fig. 3G,H), thus reflecting the time spent on movement planning and execution and the consequent acquisition and encoding of sensory information. These novel findings were made possible only by the use of an active multi-sensing paradigm in a decision-making task and the joint cognitive modeling of behavioral, neural, and sensorimotor signals.

We then capitalized on the identified neural representations of active (multi-)sensing to dissect cross-modal interactions in the human brain. To this end, we used PID, a recently developed, rigorous methodology for quantifying the information conveyed uniquely or jointly by different neural representations (Williams and Beer, 2010; Timme et al., 2014; Ince, 2017). PID further distinguishes between two types of interactions between the neural representations of the two sensory modalities (V, H). A synergistic interaction indicates that a better prediction of the multisensory neural response can be made when the predicted values of the unimodal forward models for V and H are considered jointly rather than independently. Our results suggest that this synergistic interaction of the two neural representations correlates with multisensory behavioral performance (Fig. 4B). In contrast, a redundant interaction indicates that the two unimodal models provide the same information about the multisensory condition; that is, the multisensory response at those channels is common to both modalities (Park et al., 2018; Daube et al., 2019a), suggesting that the underlying neural signals reflect a modality-invariant representation.

As a result of this analysis, we were able to identify neural signals representing these two types of interactions. Specifically, we found that EEG channels over (parieto-)occipital and prefrontal areas carried redundant representations of the two sensory streams, perhaps reflecting supramodal coding mechanisms of active sensing (Fig. 4A, redundancy). This finding is in line with previous research assigning a multimodal role to occipital cortex (Lacey et al., 2007; Murray et al., 2016) and suggesting that multisensory enhancements originate in the sensory cortices (Kayser and Logothetis, 2007; Lakatos et al., 2007; Lewis and Noppeney, 2010). In particular, recent research has implicated the visual cortex in audiovisual interactions (Mishra et al., 2007; Cao et al., 2019; Rohe et al., 2019) as well as in tactile perception and visuo-haptic interactions (Lucan et al., 2010; Sathian, 2016; Gaglianese et al., 2020). In agreement with the above, here we also found unique H information in parieto-occipital electrodes. Concerning the PFC, recent evidence has assigned it a modality-general role in arbitrating between segregation and fusion of sensory evidence from different modalities (Cao et al., 2019). Thus, the involvement of the PFC in the regulation of adaptive multisensory behaviors in general (Koechlin and Summerfield, 2007; Donoso et al., 2014; Tomov et al., 2018) and perceptual decisions in particular (Heekeren et al., 2006; Philiastides et al., 2011; Rahnev et al., 2016; Sterzer, 2016) makes it a likely contributor to the formation of the most appropriate sensory representation driving decision-making behavior. In other words, the PFC may support a mechanism that gauges candidate (multisensory or unisensory) representations to select among multiple strategies for solving the task at hand (Calvert, 2001; Hein et al., 2007; Noppeney et al., 2010; Cao et al., 2019). Our active multi-sensing task requires participants to continuously weigh different sensing strategies and refine their scanning patterns to maximize information gain. Hence, the PFC may capitalize on multisensory information (when it is of benefit) to support such flexible behavior, striking a balance between sampling more evidence and committing to a choice.

The above findings are consistent with our previous study focusing on the tactile modality, which attributed a sensory processing function to occipital cortex (specifically localized to the lateral occipital complex) and a decision formation function to right PFC (middle frontal gyrus) (Delis et al., 2018). Together with the current results, our findings suggest these two brain areas may play a cross-modal role in supporting active perception and decision-making. Overall, our work adds to the existing literature on multisensory interactions by quantifying how sensory representations interact to encode active sensing behaviors.

More importantly, here we revealed a novel functional role for contralateral centroparietal signals in active visuo-haptic decisions. We found that brain signals over left centroparietal scalp locations showed stronger encoding of active sensing when the two sensory streams were available (Fig. 4A, synergy), thus possibly representing a neural mechanism of multisensory integration. In line with the ongoing debate on the multisensory nature of primary sensory cortices (Ghazanfar and Schroeder, 2006; Liang et al., 2013), cross-modal visuo-haptic interactions leading to enhanced neural representations have been found in the primary somatosensory cortex (S1) (Zhou and Fuster, 2000; Dionne et al., 2010). Here we further characterized these interactions as carrying super-additive/synergistic representations of the active multisensory experience and demonstrated that they are related to the accuracy of active multisensory judgments.

It is also worth noting that our results do not rule out the possibility that other brain areas, not directly related to active sensing, contribute to regulating the speed and accuracy of active multisensory decisions. Indeed, recent work has characterized how multisensory representations are formed from different sensory streams in the human brain (Aller and Noppeney, 2019; Cao et al., 2019; Rohe et al., 2019). Furthermore, recent studies have begun to investigate how interactions between sensory representations shape decision formation (Bizley et al., 2016; Franzen et al., 2020; Mercier and Cappe, 2020).

Our primary aim here was to provide the missing link between the active acquisition of multisensory evidence and its transformation into a choice. Overall, our findings validated the hypotheses that (1) active sensing guides decision formation via evidence sampling and accumulation and (2) multisensory information spurs perceptual decisions by enhancing the neural encoding of active behaviors. Our information-theoretic analysis also revealed the neural substrates of the multisensory interactions in the human brain that support active multisensory perception. Ultimately, we identified and characterized a set of human brain signals that underpin multisensory judgments by subserving an enhancement of the neural encoding of active perception when multisensory information is available.

Footnotes

  • This work was supported by European Commission H2020-MSCA-IF-2018/845884 “NeuCoDe” to I.D.; Physiological Society 2018 Research Grant Scheme to I.D.; National Institutes of Health Grant R01-MH085092 to P.S.; U.S. Army Research Laboratory W911NF-10-2-0022 to P.S.; Wellcome Trust 214120/Z/18/Z to R.A.A.I.; United Kingdom Economic and Social Research Council ES/L012995/1 to P.S.; and National Alliance for Research on Schizophrenia and Depression Young Investigator Award to Q.W.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Ioannis Delis at i.delis@leeds.ac.uk

SfN exclusive license.

References

  1. Aller M, Noppeney U (2019) To integrate or not to integrate: temporal dynamics of hierarchical Bayesian causal inference. PLoS Biol 17:e3000210. doi:10.1371/journal.pbio.3000210
  2. Angelaki DE, Gu Y, DeAngelis GC (2009) Multisensory integration: psychophysics, neurophysiology, and computation. Curr Opin Neurobiol 19:452–458. doi:10.1016/j.conb.2009.06.008
  3. Bell AJ, Sejnowski TJ (1995) An information-maximization approach to blind separation and blind deconvolution. Neural Comput 7:1129–1159. doi:10.1162/neco.1995.7.6.1129
  4. Bizley JK, Jones GP, Town SM (2016) Where are multisensory signals combined for perceptual decision-making? Curr Opin Neurobiol 40:31–37. doi:10.1016/j.conb.2016.06.003
  5. Boehm U, Marsman M, Matzke D, Wagenmakers EJ (2018) On the importance of avoiding shortcuts in applying cognitive models to hierarchical data. Behav Res Methods 50:1614–1631. doi:10.3758/s13428-018-1054-3
  6. Calvert GA (2001) Cross-modal processing in the human brain: insights from functional neuroimaging studies. Cereb Cortex 11:1110–1123. doi:10.1093/cercor/11.12.1110
  7. Campion G, Wang Q, Hayward V (2005) The Pantograph Mk-II: a haptic instrument. 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp 723–728. doi:10.1109/IROS.2005.1545066
  8. Cao Y, Summerfield C, Park H, Giordano BL, Kayser C (2019) Causal inference in the multisensory brain. Neuron 102:1076–1087. doi:10.1016/j.neuron.2019.03.043
  9. Chandrasekaran C (2017) Computational principles and models of multisensory integration. Curr Opin Neurobiol 43:25–34. doi:10.1016/j.conb.2016.11.002
  10. Crosse MJ, Di Liberto GM, Bednar A, Lalor EC (2016) The Multivariate Temporal Response Function (mTRF) Toolbox: a MATLAB toolbox for relating neural signals to continuous stimuli. Front Hum Neurosci 10:604. doi:10.3389/fnhum.2016.00604
  11. Daube C, Ince RA, Gross J (2019a) Simple acoustic features can explain phoneme-based predictions of cortical responses to speech. Curr Biol 29:1924–1937. doi:10.1016/j.cub.2019.04.067
  12. Daube C, Giordano BL, Ince RA, Gross J (2019b) Quantitatively comparing predictive models with the partial information decomposition. 2019 Conference on Cognitive Computational Neuroscience, Berlin. doi:10.32470/CCN.2019.1142-0
  13. Delis I, Dmochowski JP, Sajda P, Wang Q (2018) Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing. Neuroimage 175:12–21. doi:10.1016/j.neuroimage.2018.03.035
  14. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. doi:10.1016/j.jneumeth.2003.10.009
  15. Di Liberto GM, O'Sullivan JA, Lalor EC (2015) Low-frequency cortical entrainment to speech reflects phoneme-level processing. Curr Biol 25:2457–2465. doi:10.1016/j.cub.2015.08.030
  16. Dionne JK, Meehan SK, Legon W, Staines WR (2010) Cross-modal influences in somatosensory cortex: interaction of vision and touch. Hum Brain Mapp 31:14–25.
  17. Donoso M, Collins AG, Koechlin E (2014) Human cognition: foundations of human reasoning in the prefrontal cortex. Science 344:1481–1486. doi:10.1126/science.1252254
  18. Drugowitsch J, DeAngelis GC, Klier EM, Angelaki DE, Pouget A (2014) Optimal multisensory decision-making in a reaction-time task. Elife 3:e03005. doi:10.7554/eLife.03005
  19. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433. doi:10.1038/415429a
  20. Forstmann BU, Ratcliff R, Wagenmakers EJ (2016) Sequential sampling models in cognitive neuroscience: advantages, applications, and extensions. Annu Rev Psychol 67:641–666. doi:10.1146/annurev-psych-122414-033645
  21. Frank MJ, Gagne C, Nyhus E, Masters S, Wiecki TV, Cavanagh JF, Badre D (2015) fMRI and EEG predictors of dynamic decision parameters during human reinforcement learning. J Neurosci 35:485–494. doi:10.1523/JNEUROSCI.2036-14.2015
  22. Franzen L, Delis I, De Sousa G, Kayser C, Philiastides MG (2020) Auditory information enhances post-sensory visual evidence during rapid multisensory decision-making. Nat Commun 11:1–14. doi:10.1038/s41467-020-19306-7
  23. Gaglianese A, Branco MP, Groen II, Benson NC, Vansteensel MJ, Murray MM, Petridou N, Ramsey NF (2020) Electrocorticography evidence of tactile responses in visual cortices. Brain Topogr 33:559–570. doi:10.1007/s10548-020-00783-4
  24. Genovese CR, Lazar NA, Nichols T (2002) Thresholding of statistical maps in functional neuroimaging using the false discovery rate. Neuroimage 15:870–878. doi:10.1006/nimg.2001.1037
  25. Ghazanfar AA, Schroeder CE (2006) Is neocortex essentially multisensory? Trends Cogn Sci 10:278–285. doi:10.1016/j.tics.2006.04.008
  26. Gottlieb J, Oudeyer PY (2018) Towards a neuroscience of active sampling and curiosity. Nat Rev Neurosci 19:758–770. doi:10.1038/s41583-018-0078-0
  27. Griffith V, Koch C (2014) Quantifying synergistic mutual information. In: Guided self-organization: inception, emergence, complexity and computation. Berlin: Springer.
  28. Haufe S, Meinecke F, Görgen K, Dähne S, Haynes JD, Blankertz B, Bießmann F (2014) On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage 87:96–110. doi:10.1016/j.neuroimage.2013.10.067
  29. Heekeren HR, Marrett S, Bandettini PA, Ungerleider LG (2004) A general mechanism for perceptual decision-making in the human brain. Nature 431:859–862. doi:10.1038/nature02966
  30. Heekeren HR, Marrett S, Ruff DA, Bandettini PA, Ungerleider LG (2006) Involvement of human left dorsolateral prefrontal cortex in perceptual decision making is independent of response modality. Proc Natl Acad Sci USA 103:10023–10028. doi:10.1073/pnas.0603949103
  31. Hein G, Doehrmann O, Muller NG, Kaiser J, Muckli L, Naumer MJ (2007) Object familiarity and semantic congruency modulate responses in cortical audiovisual integration areas. J Neurosci 27:7881–7887. doi:10.1523/JNEUROSCI.1740-07.2007
  32. Ince RA (2017) Measuring multivariate redundant information with pointwise common change in surprisal. Entropy 19:318. doi:10.3390/e19070318
  33. Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, Schyns PG (2017) A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Hum Brain Mapp 38:1541–1573. doi:10.1002/hbm.23471
  34. Juavinett AL, Erlich JC, Churchland AK (2018) Decision-making behaviors: weighing ethology, complexity, and sensorimotor compatibility. Curr Opin Neurobiol 49:42–50. doi:10.1016/j.conb.2017.11.001
  35. Kayser C, Logothetis NK (2007) Do early sensory cortices integrate cross-modal information? Brain Struct Funct 212:121–132. doi:10.1007/s00429-007-0154-0
  36. Koechlin E, Summerfield C (2007) An information theoretical approach to prefrontal executive function. Trends Cogn Sci 11:229–235. doi:10.1016/j.tics.2007.04.005
  37. Kruschke JK (2010) What to believe: Bayesian methods for data analysis. Trends Cogn Sci 14:293–300. doi:10.1016/j.tics.2010.05.001
  38. Lacey S, Campbell C, Sathian K (2007) Vision and touch: multiple or multisensory representations of objects? Perception 36:1513–1521. doi:10.1068/p5850
  39. Lakatos P, Chen CM, O'Connell MN, Mills A, Schroeder CE (2007) Neuronal oscillations and multisensory interaction in primary auditory cortex. Neuron 53:279–292. doi:10.1016/j.neuron.2006.12.011
  40. Lewis R, Noppeney U (2010) Audiovisual synchrony improves motion discrimination via enhanced connectivity between early visual and auditory areas. J Neurosci 30:12329–12339. doi:10.1523/JNEUROSCI.5745-09.2010
  41. Liang M, Mouraux A, Hu L, Iannetti GD (2013) Primary sensory cortices contain distinguishable spatial patterns of activity for each sense. Nat Commun 4:1979. doi:10.1038/ncomms2979
  42. Lucan JN, Foxe JJ, Gomez-Ramirez M, Sathian K, Molholm S (2010) Tactile shape discrimination recruits human lateral occipital complex during early perceptual processing. Hum Brain Mapp 31:1813–1821. doi:10.1002/hbm.20983
  43. McGill WJ (1954) Multivariate information transmission. Psychometrika 19:97–116. doi:10.1007/BF02289159
  44. Mercier MR, Cappe C (2020) The interplay between multisensory integration and perceptual decision making. Neuroimage 222:116970. doi:10.1016/j.neuroimage.2020.116970
  45. Mishra J, Martinez A, Sejnowski TJ, Hillyard SA (2007) Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. J Neurosci 27:4120–4131. doi:10.1523/JNEUROSCI.4912-06.2007
  46. Murray MM, Thelen A, Thut G, Romei V, Martuzzi R, Matusz PJ (2016) The multisensory function of the human primary visual cortex. Neuropsychologia 83:161–169. doi:10.1016/j.neuropsychologia.2015.08.011
  47. Musall S, Urai AE, Sussillo D, Churchland AK (2019) Harnessing behavioral diversity to understand neural computations for cognition. Curr Opin Neurobiol 58:229–238. doi:10.1016/j.conb.2019.09.011
  48. Najafi F, Churchland AK (2018) Perceptual decision-making: a field in the midst of a transformation. Neuron 100:453–462. doi:10.1016/j.neuron.2018.10.017
  49. Noppeney U, Ostwald D, Werner S (2010) Perceptual decisions formed by accumulation of audiovisual evidence in prefrontal cortex. J Neurosci 30:7434–7446. doi:10.1523/JNEUROSCI.0455-10.2010
  50. Nunez MD, Vandekerckhove J, Srinivasan R (2017) How attention influences perceptual decision making: single-trial EEG correlates of drift-diffusion model parameters. J Math Psychol 76:117–130. doi:10.1016/j.jmp.2016.03.003
  51. Palmer J, Huk AC, Shadlen MN (2005) The effect of stimulus strength on the speed and accuracy of a perceptual decision. J Vis 5:376–404.
  52. Park H, Ince RA, Schyns PG, Thut G, Gross J (2018) Representational interactions during audiovisual speech entrainment: redundancy in left posterior superior temporal gyrus and synergy in left motor cortex. PLoS Biol 16:e2006558. doi:10.1371/journal.pbio.2006558
  53. Parra L, Alvino C, Tang A, Pearlmutter B, Yeung N, Osman A, Sajda P (2002) Linear spatial integration for single-trial detection in encephalography. Neuroimage 17:223–230. doi:10.1006/nimg.2002.1212
  54. Parra L, Spence CD, Gerson AD, Sajda P (2005) Recipes for the linear analysis of EEG. Neuroimage 28:326–341. doi:10.1016/j.neuroimage.2005.05.032
  55. Philiastides MG, Auksztulewicz R, Heekeren HR, Blankenburg F (2011) Causal role of dorsolateral prefrontal cortex in human perceptual decision making. Curr Biol 21:980–983. doi:10.1016/j.cub.2011.04.034
  56. Plummer M (2003) JAGS: a program for analysis of Bayesian graphical models using Gibbs sampling. Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), Vienna, 20–22 March 2003.
  57. Rahnev D, Nee DE, Riddle J, Larson AS, D'Esposito M (2016) Causal evidence for frontal cortex organization for perceptual decision making. Proc Natl Acad Sci USA 113:6059–6064. doi:10.1073/pnas.1522551113
  58. Raposo D, Sheppard JP, Schrater PR, Churchland AK (2012) Multisensory decision-making in rats and humans. J Neurosci 32:3726–3735. doi:10.1523/JNEUROSCI.4998-11.2012
  59. Ratcliff R, McKoon G (2008) The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput 20:873–922. doi:10.1162/neco.2008.12-06-420
  60. Ratcliff R, Childers R (2015) Individual differences and fitting methods for the two-choice diffusion model of decision making. Decision (Wash DC) 2:237–279. doi:10.1037/dec0000030
  61. Rohe T, Ehlis AC, Noppeney U (2019) The neural dynamics of hierarchical Bayesian causal inference in multisensory perception. Nat Commun 10:1907. doi:10.1038/s41467-019-09664-2
  62. Sathian K (2016) Analysis of haptic information in the cerebral cortex. J Neurophysiol 116:1795–1806. doi:10.1152/jn.00546.2015
  63. Schroeder CE, Wilson DA, Radman T, Scharfman H, Lakatos P (2010) Dynamics of active sensing and perceptual selection. Curr Opin Neurobiol 20:172–176. doi:10.1016/j.conb.2010.02.010
  64. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423. doi:10.1002/j.1538-7305.1948.tb00917.x
  65. Spiegelhalter DJ, Best NG, Carlin BR, van der Linde A (2002) Bayesian measures of model complexity and fit. J R Stat Soc B 64:583–616. doi:10.1111/1467-9868.00353
  66. Sterzer P (2016) Moving forward in perceptual decision making. Proc Natl Acad Sci USA 113:5771–5773. doi:10.1073/pnas.1605619113
  67. Theiler J, Eubank S, Longtin A, Galdrikian B, Farmer JD (1992) Testing for nonlinearity in time-series: the method of surrogate data. Physica D 58:77–94. doi:10.1016/0167-2789(92)90102-S
  68. Timme N, Alford W, Flecker B, Beggs JM (2014) Synergy, redundancy, and multivariate information measures: an experimentalist's perspective. J Comput Neurosci 36:119–140. doi:10.1007/s10827-013-0458-4
  69. Tomov MS, Dorfman HM, Gershman SJ (2018) Neural computations underlying causal structure learning. J Neurosci 38:7143–7157. doi:10.1523/JNEUROSCI.3336-17.2018
  70. Turner BM, van Maanen L, Forstmann BU (2015) Informing cognitive abstractions through neuroimaging: the neural drift diffusion model. Psychol Rev 122:312–336. doi:10.1037/a0038894
  71. Wabersich D, Vandekerckhove J (2014) Extending JAGS: a tutorial on adding custom distributions to JAGS (with a diffusion model example). Behav Res 46:15–28. doi:10.3758/s13428-013-0369-3
  72. Wiecki TV, Sofer I, Frank MJ (2013) HDDM: hierarchical Bayesian estimation of the Drift-Diffusion Model in Python. Front Neuroinform 7:14.
  73. Williams PL, Beer RD (2010) Nonnegative decomposition of multivariate information. arXiv:1004.2515.
  74. Winkler I, Haufe S, Tangermann M (2011) Automatic classification of artifactual ICA-components for artifact removal in EEG signals. Behav Brain Funct 7:30. doi:10.1186/1744-9081-7-30
  75. Yang SC, Lengyel M, Wolpert DM (2016a) Active sensing in the categorization of visual patterns. Elife 5:e12215. doi:10.7554/eLife.12215
  76. Yang SC, Wolpert DM, Lengyel M (2016b) Theoretical perspectives on active sensing. Curr Opin Behav Sci 11:100–108. doi:10.1016/j.cobeha.2016.06.009
  77. Zhou YD, Fuster JM (2000) Visuo-tactile cross-modal associations in cortical somatosensory cells. Proc Natl Acad Sci USA 97:9777–9782. doi:10.1073/pnas.97.17.9777

Keywords

  • active sensing
  • drift diffusion model
  • EEG
  • multisensory processing
  • partial information decomposition
  • perceptual decision-making
