Featured Article | Research Articles, Behavioral/Cognitive

EEG of the Dancing Brain: Decoding Sensory, Motor, and Social Processes during Dyadic Dance

Félix Bigand, Roberta Bianco, Sara F. Abalde, Trinh Nguyen and Giacomo Novembre
Journal of Neuroscience 21 May 2025, 45 (21) e2372242025; https://doi.org/10.1523/JNEUROSCI.2372-24.2025
Author affiliations:
1 Neuroscience of Perception & Action Lab, Italian Institute of Technology, Rome 00161, Italy (Félix Bigand, Roberta Bianco, Sara F. Abalde, Trinh Nguyen, Giacomo Novembre)
2 The Open University Affiliated Research Centre at Istituto Italiano di Tecnologia (ARC@IIT), Genova 16163, Italy (Sara F. Abalde)

Abstract

Real-world social cognition requires processing and adapting to multiple dynamic information streams. Interpreting neural activity in such ecological conditions remains a key challenge for neuroscience. This study leverages advancements in denoising techniques and multivariate modeling to extract interpretable EEG signals from pairs of (male and/or female) participants engaged in spontaneous dyadic dance. Using multivariate temporal response functions (mTRFs), we investigated how music acoustics, self-generated kinematics, other-generated kinematics, and social coordination uniquely contributed to EEG activity. Electromyogram recordings from ocular, face, and neck muscles were also modeled to control for artifacts. The mTRFs effectively disentangled neural signals associated with four processes: (I) auditory tracking of music, (II) control of self-generated movements, (III) visual monitoring of partner movements, and (IV) visual tracking of social coordination. We show that the first three neural signals are driven by event-related potentials: the P50-N100-P200 triggered by acoustic events, the central lateralized movement-related cortical potentials triggered by movement initiation, and the occipital N170 triggered by movement observation. Notably, the (previously unknown) neural marker of social coordination encodes the spatiotemporal alignment between dancers, surpassing the encoding of self- or partner-related kinematics taken alone. This marker emerges when partners can see each other, exhibits a topographical distribution over occipital areas, and is specifically driven by movement observation rather than initiation. Using data-driven kinematic decomposition, we further show that vertical bounce movements best drive observers’ EEG activity. These findings highlight the potential of real-world neuroimaging, combined with multivariate modeling, to uncover the mechanisms underlying complex yet natural social behaviors.

  • dance
  • electroencephalography (EEG)
  • full-body kinematics
  • multivariate modeling
  • real-world behavior
  • sensorimotor processing
  • social coordination
  • spontaneous movement
  • temporal response function (TRF)

Significance Statement

Real-world brain function involves integrating multiple information streams simultaneously. However, due to a shortfall of computational methods, laboratory-based neuroscience often examines neural processes in isolation. Using multivariate modeling of EEG data from pairs of participants freely dancing to music, we demonstrate that it is possible to tease apart physiologically established neural processes associated with music perception, motor control, and observation of a partner's movement. Crucially, we identify a previously unknown neural marker of social coordination that encodes the spatiotemporal alignment between dancers, beyond self- or partner-related kinematics alone. These findings highlight the potential of computational neuroscience to uncover the biological mechanisms underlying real-world motor and social behaviors, advancing our understanding of how the brain supports dynamic and interactive activities.

Introduction

A central challenge in neuroscience is understanding how the brain supports natural behavior in real-world contexts. Neuroimaging studies have traditionally been limited by bulky, motion-sensitive equipment, restricting research to controlled, motionless behaviors. This approach fails to capture how the brain manages the dynamic, multifaceted demands of everyday life, where cognition involves simultaneous neural processes, unconstrained movement, and interaction with ever-changing sensory environments—factors that traditional lab studies are poorly equipped to address (Stangl et al., 2023). Despite recent advancements in mobile neuroimaging techniques (Niso et al., 2023) and algorithms for removing motion artifacts (Kothe and Jung, 2016), the study of brain activity during natural behavior remains underexploited. As a result, it remains unclear how neural processes identified in lab-controlled studies generalize to real-world experiences, limiting our ability to interpret neural signals recorded during free behavior.

Here we used human collective dance as a model to study the neural basis of real-world interactions. We reason that dance offers an ideal testbed for several reasons: (1) it is culturally ubiquitous, hence broadly generalizable (Mithen, 2006; Dunbar, 2012); (2) it is complex yet controllable through musical structure (D’Ausilio et al., 2015); and (3) it encapsulates several intertwined neural processes, including auditory tracking of music, movement control, monitoring others’ movements, and integrating these signals into cohesive experiences (Foster Vander Elst et al., 2023). These processes—notably targeting a variety of sensory and motor systems—can be effectively measured, for example, using electroencephalography (EEG). Yet, the main analytical challenge lies in disentangling these simultaneous neural signals (capturing sensory, motor, and social functions) from each other and from artifactual signals.

We tackled this challenge using multivariate temporal response functions (mTRFs), a computational approach that models the influence of different input variables on neural activity (Lalor et al., 2009; Crosse et al., 2016). We applied this method to a dataset of 80 participants, forming 40 dyads, who danced spontaneously to music while their brain activity, muscle activity, and full-body movements were recorded. Specifically, we captured EEG (64 channels), 3D full-body kinematics (22 markers), electrooculography (EOG), and electromyography (EMG, from neck and facial muscles), across various experimental conditions—detailed below (Bigand et al., 2024). mTRFs were meant to isolate four concurrent neural processes: (1) auditory perception of music, (2) motor control of specific body parts or specific movements, (3) visual perception of a partner's body movements, and (4) visual tracking of social coordination, defined as the spatiotemporal alignment of movements between dancers, whether in-phase or anti-phase. Importantly, EOG and EMG signals were included as model predictors to account for potential muscle artifacts affecting the neural data. Additionally, we used event-related potential (ERP) analyses to anchor our findings in established physiological markers of sensory (auditory and visual evoked potentials) and motor (movement-related cortical potentials) processes (Deecke et al., 1969; Bach and Ullrich, 1997; Novembre et al., 2018).

Previous studies have used mTRFs to extract neural tracking of ecological auditory and visual stimuli, such as speech, music, or films (Di Liberto et al., 2015, 2020; O’Sullivan et al., 2017; Fiedler et al., 2019; Jessen et al., 2019; Bianco et al., 2024; Desai et al., 2024). However, aside from one human study and recent animal research incorporating body kinematics (Musall et al., 2019; Stringer et al., 2019; Di Liberto et al., 2021; Mao et al., 2021; Tremblay et al., 2023; Lanzarini et al., 2025), human studies that concurrently examine both sensory and motor processes using mTRFs—particularly in naturalistic behaviors—remain scarce. Furthermore, to our knowledge, no study has explicitly modeled social processes or addressed the neural activity associated with body movement artifact leakage, as we do here. As such, our holistic approach aims to demonstrate that naturalistic human behaviors—implying real-time adaptation and movement—can be effectively explored using traditional electrophysiology. Therefore, our study highlights the potential of advanced neural analysis techniques to bridge the gap between lab-controlled and real-world neuroimaging research, enhancing our understanding of the neural basis of natural human behavior.

Materials and Methods

The EEG, EOG, EMG, and kinematic data analyzed here were collected as part of a previous study (Bigand et al., 2024), where participant dyads engaged in spontaneous dance under a 2 × 2 experimental design (Fig. 1a,b). The manipulated within-dyad factors were musical input [whether participants danced to the same (synchronous) or different (asynchronous) music] and visual contact (whether participants could see or not see their dancing partner).

Figure 1.

Experimental materials and methods. a, Experimental setup. We applied the mTRF method to a previously collected dataset (Bigand et al., 2024) for which dyads of participants danced spontaneously in response to music while we recorded electroencephalography (EEG, 64 channels), electrooculography (EOG), electromyography (EMG, from neck and face muscles), and 3D full-body kinematics (22 markers). b, Experimental design. Data were collected under the experimental conditions of the original study, which utilized a 2 × 2 factorial design. The two manipulated factors were musical input (whether participants listened to the same or different music presented through earphones) and visual contact (whether participants could see or not see each other). c, Overview of the modeling paradigm. We estimated multivariate temporal response functions (mTRFs), which learned the optimal linear mapping between the set of variables of interest [here music, self- and other-generated movements, social coordination, as well as other control variables (data not shown) such as ocular, facial and neck muscle activity] and the EEG data. d, Model comparisons. To assess the unique contribution of each variable (regressor) to the EEG data, we trained reduced models encompassing all variables apart from the specified one. The difference in prediction accuracy between the reduced and full model (encompassing all variables), denoted Δr, yields the unique contribution of this variable.

Participants

Eighty participants (54 females; mean age, 26.15 years; SD, 6.43 years; 74 right-handed) formed 40 dyads (52% female–male, 41% female–female, and 7% male–male). To minimize interindividual variability while maximizing generalizability, we recruited only laypersons (i.e., individuals without formal dance training). All participants forming a dyad were familiar with each other and were informed about the social nature of the task during recruitment when they received the following message (translated from Italian): “You will have to come with someone you know (friend, family member, colleague…) with whom you will dance while listening to music (almost) like in a disco!”. As a measure of participants’ inclination toward social dance, we present the results of a post hoc questionnaire. Specifically, participants rated the statement “How often do you dance to music” with a mean score of 4.363 (SD = 1.052) on a 6-point Likert scale (1, Never, to 6, Very frequently), and rated the statements “When I am at a party, I am likely to be one of the first people starting to dance” and “I do not worry about what other people think of my dancing skills” with mean scores of 3.863 (SD = 1.626) and 3.863 (SD = 1.349), respectively, on a 6-point Likert scale (1, Strongly disagree, to 6, Strongly agree). Participants had normal or corrected-to-normal vision, normal hearing, and no history of neurological disorders. Data from five dyads were excluded due to recording failure in the motion capture system, leaving data from 70 participants for the analysis. All participants provided written informed consent to participate in the study and were compensated €25 for their participation. All experimental procedures were approved by “Comitato Etico Regionale della Liguria” (794/2021 - DB id 12093) and were carried out under the principles of the revised Helsinki Declaration.

Musical stimuli

The musical stimuli consisted of eight songs with an average duration of 39.8 s (standard deviation: 1.95 s). Each song was presented in all four experimental conditions (see below, Experimental design and procedure), resulting in a total of 32 trials. These songs were remakes of famous song refrains from electronic dance music and disco-funk genres (Bigand et al., 2024). Each song was adapted using the same four musical instruments: drums, bass, keyboards, and violin (the latter providing the vocal melody). All stimuli followed a 4/4 meter and spanned 20 bars. To create these adaptations, author FB and a professional composer (Raoul Tchoï) transcribed the original 4-bar refrain loops into MIDI format and synthesized them using MIDI instruments in Logic Pro X (Apple). The rearranged songs were then systematically structured by repeating the 4-bar loops five times and sequentially adding each instrument to the musical scene, in the following order: (1) drums, (2) bass, (3) keyboards, (4) voice, and (5) voice bis (i.e., the loop with full instruments was repeated twice). Loudness level across songs was controlled within a range of 1.5 LUFS (a measure accounting for the frequency sensitivity of the human auditory system). The songs were presented to the two participants forming each dyad through two separate EEG-compatible earphones (Etymotic ER3C), each connected to a distinct output channel of an audio interface (RME Fireface UC).

Every trial consisted of one song flanked on each side by a fast-rising tone (rise time, 5 ms; fall time, 30 ms; frequency, 494 Hz; duration, 350 ms), with 8 s of silence preceding the song and 7 s of silence following it, yielding the pattern beep-silence-song-silence-beep. Trials were controlled using Presentation software (Neurobehavioral Systems), with synchronization between song presentation, EEG, and motion capture recordings achieved via TTL pulses. A TTL pulse was sent at the start of each trial from Presentation to both the EEG system (BioSemi ActiveTwo) and the motion capture system (Vicon; Lock+). This pulse activated the motion capture system, initiating recording, which automatically stopped after 1 min. Simultaneously, the TTL pulse was stored alongside the continuous EEG recordings, which remained uninterrupted throughout the experiment. The TTL pulse, whose value varied based on the trial condition, enabled us to epoch the EEG data and retrieve the corresponding trial condition for analysis.

Experimental design and procedure

EEG and kinematic data were recorded across four conditions derived from a 2 × 2 experimental design with visual contact (Yes, No) and musical input (Same, Different) as within-dyad factors. Conditions with or without visual contact were defined by the presence or absence of a curtain between the two participants in each dyad. Musical input was manipulated by presenting either identical or different songs to the participants through earphones. Because each song had a different tempo, the songs played simultaneously to the two participants were either perfectly synchronized (in the same-music condition) or slightly out of sync (in the different-music condition). To minimize intertrial variability, this degree of asynchrony was maintained constant across trials belonging to the different-music condition (i.e., relative tempo difference between the two songs was precisely 8.5%). This was achieved by presenting participants with songs from different genres during the different-music condition (the tempo associated with electronic dance music songs was on average faster than that of disco-funk songs; Bigand et al., 2024). Trials were organized into four blocks, with each block including eight trials (two trials per condition, each trial featuring a different song). The presentation order of the blocks—and trials within blocks—was randomized, except for the deliberate presentation of subsequent pairs of yes-vision or no-vision trials to minimize the displacement of the curtain.

Before the experiment began, participants were told to behave as in a “silent disco,” in which they should face each other, enjoy the music, and remain still during periods of silence before and after the music. To enhance participants’ comfort, the overhead lighting was dimmed using alternating red and blue colored filters, creating a softer, “disco-like” atmosphere. Additionally, the experimenter remained out of sight in a custom-built cabin (enclosed by 1.5-m-high panels), ensuring mutual invisibility between the experimenter and participants, as well as concealing the acquisition computers. Participants completed two training trials using songs not included in the main experiment to familiarize themselves with the task and setting. During this phase, they could request adjustments to their earphone volume, which they were instructed to set “as loud as possible without discomfort.” Participants were allowed (but not required) to dance freely within their designated space, keeping their head orientation toward their partner as steady as possible. Speaking or singing during trials was prohibited. Throughout the experiment, participants stood facing each other, each positioned within a marked area of 0.5 m × 0.7 m, with a separation of 2.5 m between them.

Kinematics data acquisition and preprocessing

3D full-body kinematics were recorded using wearable markers (22 per participant; size, 14 mm). Markers were placed on specific body parts, denoted as follows (L, left; R, right; F, front; B, back): (1) LB Head, (2) LF Head, (3) RF Head, (4) RB Head, (5) Sternum, (6) L Shoulder, (7) R Shoulder, (8) L Elbow, (9) L Wrist, (10) L Hand, (11) R Elbow, (12) R Wrist, (13) R Hand, (14) Pelvis, (15) L Hip, (16) R Hip, (17) L Knee, (18) L Ankle, (19) L Foot, (20) R Knee, (21) R Ankle, and (22) R Foot (Fig. 1a). Additionally, one supplementary marker was placed asymmetrically on either the left or right thigh of each participant. This marker was only used to facilitate Nexus software in the distinction between participants and was not considered in subsequent analyses. Eight optical motion capture cameras (Vicon system) recorded the markers’ trajectories at a sampling rate of 250 Hz. The cameras were positioned to capture the participants from various angles, ensuring that each participant was visible to at least six cameras even when visual contact was obstructed by the curtain. A high-definition video camera, synchronized with all the optical motion capture cameras, recorded the scene from an aerial view (Vicon Vue; 25 Hz sampling frequency; 1,920 × 1,080 pixels). We used a Vicon motion capture system to record full-body 3D positions with high spatial (<1 mm precision) and temporal (250 Hz) resolution. While alternative methods, such as inertial measurement units or accelerometers, could be considered, the feasibility of repeating our study with fewer markers remains to be tested. Notably, full-body tracking was essential here for breaking down complex dance kinematics into the elementary movement components that drove neural signals (see below, Kinematic feature selection).

Markers’ trajectories were corrected for swaps or mislabels via the Nexus manual labeling tool (Vicon). Then, automated correction of frequent and systematic marker swaps was performed using custom Python code. Any gaps in the marker trajectories were then filled using the automatic gap-filling pipeline in Nexus. The proportion of time with gaps, calculated for each marker and averaged across participants, ranged from a minimum of 0.128% (L Foot) to a maximum of 2.767% (R Hip), with a mean of 0.688% and a standard deviation of 0.739%. Lastly, all trajectories were inspected visually within Nexus software and manually adjusted if they did not match the aerial-view video recording. Subsequent data analyses were carried out in Python using custom code. Marker trajectories comprised 3D positions (along x, y, and z axes) corresponding to each of the 22 body parts, resulting in time series of posture vectors of 66 dimensions.

EEG data acquisition and preprocessing

We recorded neural activity from both participants simultaneously using a dual-EEG setup with the BioSemi ActiveTwo system. This setup consists of two AD-Boxes, each independently recording and referencing EEG from a single participant. The data from the two AD-Boxes are synchronized at the hardware level: the “slave” AD-Box transmits data via optical fiber to the “master” AD-Box, which then relays all EEG signals and trigger information to the acquisition computer. For a detailed schematic of the BioSemi ActiveTwo dual-EEG configuration, see Barraza et al. (2019). For each participant, the EEG was recorded from 64 Ag/AgCl active electrodes (placed on the scalp according to the extended international 10–10 system). To help retain the naturalistic nature of the study, we used 2-m-long cables, custom-built by the manufacturer to meet our specific requirements. Each EEG amplifier was positioned behind the participant at hip height, with cables taped to the upper back to minimize weight while ensuring they remained loose enough to prevent any perceived constraint or pulling. This setup allowed participants to move relatively freely while remaining within their designated area (see above, Experimental design and procedure).

EEG signals were digitized at 1,024 Hz using the BioSemi ActiveTwo system. Subsequently, the data were preprocessed and analyzed using Matlab R2022. EEG recorded from moving participants is susceptible to muscular artifacts. To mitigate this issue, we preprocessed the EEG data of the dancing participants using a fully data-driven pipeline that we had previously developed for analyzing EEG data in awake monkeys (Bianco et al., 2024). This pipeline primarily utilizes open-source algorithms from the Fieldtrip (Oostenveld et al., 2011) and EEGLAB (Delorme and Makeig, 2004) toolboxes. EEG signals were digitally filtered between 1 and 8 Hz (Butterworth filters, order 3), downsampled to 100 Hz, and trimmed according to the duration of the trial-specific songs. Faulty or noisy electrodes were provisionally discarded before rereferencing the data to a common average reference. This was done to prevent the leakage of noise to all electrodes during rereferencing. Criteria for flagging faulty or noisy electrodes included prolonged flat lines (lasting >5 s), abnormal interchannel correlation (lower than 0.8), or deviations in amplitude metrics from the scalp average (mean, SD, or peak-to-peak values exceeding 3 SD from the scalp average). These assessments were made using EEGLAB's clean_flatlines and clean_channels functions (Delorme and Makeig, 2004) and custom Matlab code. To remove movement artifacts, we further denoised the rereferenced data using a validated algorithm for automatic artifact correction: Artifact Subspace Reconstruction (ASR, threshold value 5; Kothe and Jung, 2016). This algorithm has been previously applied to human data, including in music-making and dance studies (Ramírez-Moreno et al., 2023; Theofanopoulou et al., 2024). Finally, eye-movement artifacts were subtracted from the ASR-cleaned data using another automatic artifact correction algorithm, independent component analysis (ICA), via EEGLAB's ICLabel function (Pion-Tonachini et al., 2019). Independent components that ICLabel classified as eye-movement artifacts (i.e., those for which the “eye” category had the highest probability, with no minimum threshold) were removed. At this stage, noisy or faulty electrodes (as flagged at the start of this preprocessing pipeline) were interpolated by replacing their voltage with the average voltage of the neighboring electrodes (20 mm distance).
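For illustration, the channel-flagging criteria described above (implemented in the original pipeline with EEGLAB's clean_flatlines and clean_channels functions and custom Matlab code) can be sketched as follows. This is a minimal NumPy sketch, not the authors' code; the function name and any defaults beyond the thresholds stated in the text are assumptions.

```python
import numpy as np

def flag_bad_channels(eeg, fs, flat_s=5.0, corr_thresh=0.8, z_thresh=3.0):
    """Flag noisy/faulty channels; eeg is (n_channels, n_samples).

    Simplified analogue of the criteria in the text (flat lines > 5 s,
    interchannel correlation < 0.8, amplitude metrics > 3 SD from the
    scalp average); not the original EEGLAB-based implementation.
    """
    n_ch, _ = eeg.shape
    bad = np.zeros(n_ch, dtype=bool)

    # 1) Prolonged flat lines: near-zero sample-to-sample change for > flat_s seconds.
    flat = np.abs(np.diff(eeg, axis=1)) < 1e-8
    for ch in range(n_ch):
        run, longest = 0, 0
        for is_flat in flat[ch]:
            run = run + 1 if is_flat else 0
            longest = max(longest, run)
        bad[ch] |= longest > flat_s * fs

    # 2) Abnormal interchannel correlation: mean |r| with other channels below threshold.
    r = np.corrcoef(eeg)
    np.fill_diagonal(r, np.nan)
    bad |= np.nanmean(np.abs(r), axis=1) < corr_thresh

    # 3) Amplitude metrics (mean, SD, peak-to-peak) deviating > 3 SD from the scalp average.
    for metric in (eeg.mean(axis=1), eeg.std(axis=1), np.ptp(eeg, axis=1)):
        z = (metric - metric.mean()) / metric.std()
        bad |= np.abs(z) > z_thresh

    return np.where(bad)[0]
```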

EOG and EMG data acquisition and preprocessing

Two EOG channels were recorded from all participants using surface Ag–AgCl electrodes. Electrodes were attached using disposable adhesive disks at specific anatomical locations: the left and right outer canthi. We also recorded four EMG signals from the cheeks (the left and right zygomata) and the neck (the left and right paraspinal muscles) for control purposes. EOG/EMG signals were digitized at 1,024 Hz using the BioSemi ActiveTwo system. The EOG/EMG data were filtered, downsampled, and trimmed in the same way as the EEG data, rereferenced to the scalp average, and ASR cleaned using a threshold value of 5, to maintain consistency with the EEG signals from scalp channels. Note that these signals were measured from a subset of participants (n = 58); therefore, all subsequent analyses involving this data subset include only these participants.

Multivariate temporal response functions

Events, such as hearing a fast-rising sound or initiating a movement, elicit phase-locked brain activity within a specific time window [t1, t2], which can include postevent (e.g., response to sounds) and pre-event (e.g., movement initiation) components (Luck, 2014). Temporal response functions (TRFs) can be used to characterize this relationship at the level of EEG electrodes (Lalor et al., 2009; Crosse et al., 2016). In this study, we applied mTRFs to delineate the distinct neural processes that occur simultaneously during dyadic dance. Specifically, we first extracted a diverse set of time-resolved variables, representing the following: (I) musical input, (II) self-generated movements, (III) partner-generated movements, (IV) social coordination, and (V, VI, and VII) ocular, facial, and neck muscle activity (Fig. 1c, left). Next, we estimated TRFs (Fig. 1c, middle) to quantify how these variables modulate EEG signals, for each electrode separately (Fig. 1c, right). The following sections provide a detailed explanation of these two steps.

Step 1: Extraction of variables

(I) Music. Musical input was represented using spectral flux, which captures fluctuations in the acoustic power spectrum. Spectral flux has been shown to outperform other acoustic features, such as the envelope and its derivative, in predicting neural signals elicited by music (Weineck et al., 2022). To extract it, we first bandpass filtered the musical stimuli into 128 logarithmically spaced frequency bands ranging from 100 to 8,000 Hz using a gammatone filter bank. Spectral flux was then computed for each frequency band by calculating the first derivative of the band's amplitude over time. Finally, the broadband spectral flux, representing overall changes in the spectral content, was derived by averaging the spectral flux across all 128 bands.

(II and III) Self- and other-generated movements. The movements produced by each participant (self-generated) and their partners (other-generated) were represented using velocity magnitude. To reduce dimensionality, full-body trajectories were decomposed into 15 principal movement patterns that collectively explained over 95% of the kinematic variance (see below, Kinematic feature selection). The velocity magnitude of each principal movement was calculated by taking the first derivative of its position over time and then computing the absolute value of this derivative. Out of the 15 principal movements, preliminary analyses identified bounce as the movement that explained most of the neural encoding of both self- and other-generated movements (see results below, Kinematic feature selection, and Fig. 2). Consequently, only the velocity magnitude of the bounce trajectory was included in subsequent models.

(IV) Social coordination. To assess social coordination, we extracted a categorical measure indicating whether the bounce movements of participants within a dyad were in-phase or anti-phase, i.e., whether both individuals moved in the same direction (in-phase) or in opposite directions (anti-phase). We obtained this measure by multiplying the signs of the bounce velocity time series (i.e., the respective directions of movement) across the two participants forming a dyad.

(V, VI, and VII) Ocular, facial, and neck muscle activity. To control for muscular activity potentially leaking into the EEG signals, we also included EOG (measured from the left and right eyes) and EMG (measured from the cheeks and the neck) time series in the models. Both EOG channels (left and right eye) were included to capture horizontal saccades, which generate opposite left–right activity (positive values on one side and negative on the other). For the cheek and neck muscles, the average signal from the left and right EMG channels was used, respectively.

All these variables, each of which is time-resolved, were downsampled to match the EEG sampling frequency of 100 Hz and trimmed according to the duration of the trial-specific songs. To account for interindividual variability, all variables were standardized on a per-participant basis by normalizing each time series to its standard deviation (computed across all trials).
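For illustration, the kinematic regressors described above can be derived from the two dancers' bounce score time series roughly as follows. This is a minimal NumPy sketch, not the authors' implementation; the function name and the assumption that the inputs are already resampled to the EEG rate are ours.

```python
import numpy as np

def bounce_regressors(bounce_a, bounce_b, fs=100.0):
    """Velocity magnitude and coordination regressors from two bounce time series.

    bounce_a, bounce_b: 1D bounce position (PCA score) time series, one per dancer,
    assumed to be already resampled to the EEG sampling rate (100 Hz).
    """
    # Velocity = first derivative of position over time.
    vel_a = np.gradient(bounce_a) * fs
    vel_b = np.gradient(bounce_b) * fs

    # (II, III) Self- and other-generated movement regressors: velocity magnitude,
    # standardized per participant by dividing by its standard deviation.
    vmag_a = np.abs(vel_a) / np.abs(vel_a).std()
    vmag_b = np.abs(vel_b) / np.abs(vel_b).std()

    # (IV) Social coordination: product of the velocity signs,
    # +1 when both dancers move in the same direction (in-phase),
    # -1 when they move in opposite directions (anti-phase).
    coordination = np.sign(vel_a) * np.sign(vel_b)

    return vmag_a, vmag_b, coordination
```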

Figure 2.

Neural encoding of self- and other-related movements across different principal movements (PMs). Bars represent the unique contribution (Δr) of each PM (grand-average) to the EEG signal recorded from the self (electrode Cz, red bars) or the other (electrode Oz, orange bars). Δr values represent the difference in EEG prediction accuracy between the PM-specific reduced models and the full model, for self- and other-generated movements, respectively. Error bars represent ±1 standard error of the mean (SEM). Gray circle diagrams illustrate the proportion (%) of kinematic variance explained by each PM, with the first 15 PMs accounting for >95% of the total variance. Together, these results indicate that bounce (PM10)—whether self-generated or observed in others—was the strongest predictor of EEG activity, despite accounting for only ∼1% of the kinematic variance.

Step 2: mTRF estimation

We estimated TRFs via a multivariate lagged regression, which fitted the optimal linear mapping between the abovementioned variables and EEG at each electrode (mTRF toolbox, encoding model; Crosse et al., 2016; Fig. 1c). A time-lag window of −250 to 300 ms was selected to encompass commonly observed ERP responses associated with sound perception (Novembre et al., 2018), execution of fast-repeated movements (Gerloff et al., 1997), and visual perception of biological movements (Jokisch et al., 2005). This window also ensured that the contribution of redundant (potentially irrelevant) information was minimized, especially considering the rhythmic structure of the task, with musical beats and some dance movements (e.g., bounce) occurring approximately every 500 ms. Importantly, in a control analysis we confirmed that the selected window did not reduce prediction accuracy compared with a broader [−700, +700 ms] window. For each participant and experimental condition, mTRFs were estimated, including either all variables simultaneously (full model) or all variables except the specified one (reduced models; see below for details). Participant- and condition-specific TRFs were estimated as the average TRF required to predict each of the eight condition trials using data from the remaining seven trials (i.e., TRFs were fit eight times). Regularized (ridge) regression was used to fit the TRFs, maximizing prediction accuracy (the correlation between the predicted and actual EEG data; Pearson's r) without overfitting the training data. The optimal regularization parameter (λ) was selected via leave-one-out cross-validation across trials (i.e., songs), tested over a range from 0 to 10^8 (0, 10^−4, 10^−3, …, 10^8). This yielded one optimal λ value per trial, condition, and participant. Finally, prediction accuracies for each condition were assessed using a generic approach (Di Liberto and Lalor, 2017; Jessen et al., 2019), where the Pearson's r between predicted and actual EEG data was calculated across all eight trials of the nth participant, with predictions based on a generic TRF averaged across the subject-specific TRFs of the N − 1 remaining participants (N = 70). The prediction accuracy of a model describes the amount of EEG variance that the model can account for. To evaluate the unique amount of EEG variance that each variable accounts for, we constructed reduced models that included all variables apart from the specified one. The difference in prediction accuracy between the full (comprising all variables) and the reduced model yielded the unique contribution, denoted as Δr, of that specific variable to the variance explained in the EEG data (Fig. 1d).
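The core of this estimation can be illustrated with a simplified lagged ridge regression in NumPy. This is a conceptual sketch rather than the mTRF toolbox itself; cross-validated selection of λ, the generic-TRF averaging, and multi-trial handling described above are omitted, and all function names are ours. Regressors and EEG are assumed to be zero-mean (standardized, as in the text).

```python
import numpy as np

def lagged_design(X, lags):
    """Time-lagged design matrix. X: (n_times, n_features); lags in samples."""
    n_times, n_feat = X.shape
    D = np.zeros((n_times, n_feat * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(X, lag, axis=0)
        # Zero out samples that wrapped around the array edges.
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        D[:, i * n_feat:(i + 1) * n_feat] = shifted
    return D

def fit_trf(X, y, lags, lam):
    """Ridge solution mapping lagged regressors X to one EEG channel y."""
    D = lagged_design(X, lags)
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

def prediction_accuracy(X, y, lags, w):
    """Pearson's r between predicted and actual EEG."""
    y_hat = lagged_design(X, lags) @ w
    return np.corrcoef(y_hat, y)[0, 1]

# Lags spanning -250 to 300 ms at the 100 Hz EEG sampling rate.
fs = 100.0
lags = np.arange(int(-0.25 * fs), int(0.30 * fs) + 1)

# The unique contribution (delta r) of one variable is then the drop in
# prediction accuracy when that variable is left out of the model, e.g.:
# delta_r = prediction_accuracy(X_full, eeg_ch, lags, w_full) \
#           - prediction_accuracy(X_reduced, eeg_ch, lags, w_reduced)
```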

Kinematic feature selection

To reduce dimensionality, we used a data-driven method to determine a subset of kinematic variables to use in the TRF models. The kinematic data were decomposed into a set of principal movements using principal component analysis (PCA), following the same pipeline as described in Bigand et al. (2024). These principal movements reflect movement primitives that are generalizable across trials, conditions, and participants. This PCA approach has been previously validated for a wide range of human movements, including dance (Troje, 2002; Daffertshofer et al., 2004; Toiviainen et al., 2010; Federolf et al., 2014; Yan et al., 2020; Bigand et al., 2021). The first 15 principal movements—accounting for >95% of the kinematic variance—were retained for further analyses (Federolf et al., 2014; Bigand et al., 2021). The score time series obtained from the PCA reflected the position of each principal movement over time. These time series were low-pass filtered below 6 Hz using a Butterworth filter (second-order, zero-phase) to increase the signal-to-noise ratio. These 15 principal movements were reminiscent of common “dance moves” such as body sway, twist, upper-body side bend and rock, bounce, side displacement, head bob, hip swing, and hand movements (Fig. 2, Movie 1; Bigand et al., 2024).
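A minimal sketch of this decomposition is shown below, using scikit-learn and SciPy under assumed array shapes; it is not the authors' pipeline, and the pooling of data across trials and participants is indicated only schematically in the comments.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

def principal_movements(postures, fs=250.0, n_pm=15, cutoff_hz=6.0):
    """Decompose posture time series into principal movements (PMs).

    postures: (n_samples, 66) array of concatenated 3D marker positions
    (22 markers x 3 axes), assumed pooled across trials and participants.
    Returns the PM score (position) time series, (n_samples, n_pm), and the PCA.
    """
    pca = PCA(n_components=n_pm)           # PCA centers the data internally
    scores = pca.fit_transform(postures)
    print(f"Variance explained by the first {n_pm} PMs: "
          f"{pca.explained_variance_ratio_.sum():.1%}")

    # Low-pass filter the PM score time series below 6 Hz
    # (second-order, zero-phase Butterworth) to increase the SNR.
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    scores = filtfilt(b, a, scores, axis=0)
    return scores, pca
```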

To determine which principal movements to include in the TRF models, we tested their association with EEG modulations. Previous evidence suggests that TRFs or equivalent models can accurately capture neural activity associated with both the generation (Musall et al., 2019) and the observation (O’Sullivan et al., 2017; Jessen et al., 2019) of biological movement. Accordingly, we tested the unique contribution of the 15 principal movements, either self-generated or generated by (and therefore observed in) the dancing partner. In other words, we fit 30 reduced models and computed the difference in prediction accuracy (Δr) between each reduced model and a full model (including all 30 principal movements plus spectral flux) for each participant and condition, using the generic approach outlined above (see above, mTRF estimation). Spectral flux was included in the full model to ensure that the explanatory power of individual principal movements was not influenced by movements correlated with the music as participants were dancing to music. To reduce computational cost, the 15 models for other-generated movements were trained in the visual conditions, while those for self-generated movements were trained in the nonvisual conditions. This ensured a balanced number of trials for analyzing both motor control and movement observation activities while testing movement observation under the conditions where it was most likely to occur. The full model was trained across all conditions, allowing for the computation of “self” and “other” Δr values, averaged across the two nonvisual and visual conditions, respectively.

The results of this preliminary analysis revealed that bounce movement—i.e., vertical oscillations of the body achieved through knee flexion and extension—was largely the main contributor to EEG prediction, notably across both self- and other-generated movements (Fig. 2, PM10), despite accounting for no more than 1% of the total kinematic variance. Specifically, self-generated bounce alone accounted for >84% of the EEG prediction gain (Δr > 0) across all principal movements at electrode Cz, commonly associated with motor activity (Kornhuber and Deecke, 1965; Deecke et al., 1969; Shibasaki et al., 1980; Smulders and Miller, 2011; Vercillo et al., 2018). Additionally, other-generated bounce alone accounted for >80% of EEG prediction gain at electrode Oz, a canonical site indicative of motion-evoked visual responses (Kubová et al., 1995; Bach and Ullrich, 1997; Puce et al., 2000; Jokisch et al., 2005; O’Sullivan et al., 2017). Given these results, bounce will serve as the primary movement feature in all subsequent analyses. Henceforth, when referring to “movement” in the following sections, we specifically denote “bounce” (except when discussing the results of the body-part–specific analyses).

Statistical analyses

We assessed the distinct neural encoding of music, self-generated movements, other-generated movements, and social coordination while controlling for artifact leakage from eye, facial, and neck movements. We created seven reduced models, accounting for the following: (I) music (all variables minus spectral flux); (II) self-generated movements (all variables minus velocity magnitude of self-generated bounce); (III) other-generated movements (all variables minus velocity magnitude of other-generated bounce); (IV) social coordination (all variables minus interpersonal bounce coordination); and (V, VI, and VII) ocular, facial, and neck muscle activity (all variables minus EOG, facial EMG, or neck EMG, respectively). We then compared the prediction accuracy of each reduced model with that of a full model encompassing all seven variables; each difference yielded the unique contribution Δr of the corresponding variable.

To compare the unique contributions of music, self-generated movements, other-generated movements, and social coordination across different experimental conditions [visual contact (yes/no) × music (same/different)], Δr values were averaged across relevant electrodes for each participant and predictor. Relevant electrodes were defined independently for each predictor as those that exhibited a prediction gain (Δr > 0). For each predictor, this gain was computed across conditions where the associated neural process was expected to occur: all conditions for music and self-related movements and visual conditions for other-generated movements and coordination. The Δr values to be statistically compared were computed for each condition and then averaged across the defined electrodes. This yielded a Δr value per participant, condition, and variable (music, self- and other-generated movements, and social coordination).
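For illustration, the electrode-selection and averaging step can be sketched as follows (NumPy, with assumed array shapes and names). One point is an interpretation on our part: the prediction gain used to define relevant electrodes is taken here as the grand average across participants and the expected conditions.

```python
import numpy as np

def average_over_relevant_electrodes(delta_r, gain_conditions):
    """Average delta-r over electrodes showing a prediction gain.

    delta_r: (n_participants, n_conditions, n_electrodes) unique contributions
    for one predictor; gain_conditions: indices of the conditions where the
    associated neural process is expected to occur (e.g., visual conditions
    for other-generated movements and coordination).
    Returns one delta-r value per participant and condition.
    """
    # Relevant electrodes: positive grand-average gain across the expected conditions
    # (assumed reading of "exhibited a prediction gain, delta r > 0").
    gain = delta_r[:, gain_conditions, :].mean(axis=(0, 1))
    relevant = gain > 0
    return delta_r[:, :, relevant].mean(axis=2)
```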

We assessed differences in unique contributions across conditions using a 2 × 2 repeated-measures ANOVA with the factors “visual contact” and “musical input.” Δr values were normally distributed and entered into the ANOVA as the dependent variable. To control for multiple comparisons across the four variables, p values were Bonferroni corrected.
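The text does not specify the software used for the ANOVA; as an equivalent sketch in Python, the 2 × 2 repeated-measures ANOVA and Bonferroni correction could be run with statsmodels as below. The data-frame column names are assumptions.

```python
from statsmodels.stats.anova import AnovaRM

def rm_anova_delta_r(df):
    """2 x 2 repeated-measures ANOVA on delta-r values.

    df: long-format table with one delta-r value per participant and condition
    for a given predictor, with (assumed) columns 'participant',
    'visual_contact', 'musical_input', and 'delta_r'.
    """
    res = AnovaRM(df, depvar="delta_r", subject="participant",
                  within=["visual_contact", "musical_input"]).fit()
    return res.anova_table  # F values and uncorrected p values

def bonferroni(p, n_tests=4):
    """Bonferroni correction across the four predictors (music, self, other, coordination)."""
    return min(p * n_tests, 1.0)
```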

Event-related potentials

Extraction of ERPs

To aid in interpreting the TRF results—particularly the physiological origins of the TRF model weights—we examined phase-locked neural responses, i.e., ERPs, evoked by changes in music intensity, self-generated movement velocity, other-generated movement velocity, and social coordination (transitions between in-phase and anti-phase coordination; see below). EEG responses are largely evoked by fast changes in the environment (Somervail et al., 2021), including fluctuations in the auditory spectrum (Weineck et al., 2022) and peaks in movement velocity (Varlet et al., 2023). Therefore, we determined the onset times of events, such as sounds or movements, by identifying peaks in the respective time series using Matlab's findpeaks function (with default parameters). Acoustic onsets were thus aligned with musical notes played by any of the four instruments in the stimuli, while motion onsets were aligned with velocity peaks. Coordination onsets were obtained from the first derivative of the coordination time series, corresponding to transitions between in-phase and anti-phase states. To improve the signal-to-noise ratio, acoustic peaks were filtered by selecting only the most salient, i.e., those that were 3 SD away from the mean of the trial. This step was unnecessary in the case of the other variables, such as movement and coordination, presumably because the kinematic data had already been low-pass filtered, as described earlier. Consequently, the signal-to-noise ratio for these variables was already maximized. The ERP epochs spanned the same time window as the TRFs (−250 to 300 ms).
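The onset detection and epoching step can be illustrated with the following sketch, which uses scipy.signal.find_peaks as a Python analogue of Matlab's findpeaks (with default parameters, as in the text). Function names, shapes, and the salience threshold argument are ours.

```python
import numpy as np
from scipy.signal import find_peaks

def erp_epochs(eeg, regressor, fs=100.0, tmin=-0.25, tmax=0.30, salient_sd=None):
    """Epoch EEG around event onsets defined as peaks of a regressor time series.

    eeg: (n_channels, n_times); regressor: (n_times,), e.g., spectral flux or
    bounce velocity magnitude. If salient_sd is given (e.g., 3 for acoustic
    onsets), only peaks exceeding mean + salient_sd * SD of the trial are kept.
    """
    peaks, _ = find_peaks(regressor)            # analogue of Matlab's findpeaks
    if salient_sd is not None:
        thr = regressor.mean() + salient_sd * regressor.std()
        peaks = peaks[regressor[peaks] > thr]

    pre, post = int(round(tmin * fs)), int(round(tmax * fs))   # -250 to 300 ms
    epochs = [eeg[:, p + pre:p + post]
              for p in peaks
              if p + pre >= 0 and p + post <= eeg.shape[1]]
    return (np.stack(epochs) if epochs
            else np.empty((0, eeg.shape[0], post - pre)))
```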

ERP sensitivity to variables’ intensity

ERP amplitude largely depends on the differential intensity of the evoking change, and this sensitivity to differential intensity is supramodal, i.e., it is a property observed across different sensory systems (Somervail et al., 2021). Here, to quantify whether ERPs were modulated by the differential amplitude of musical sounds or by the speed of self- and other-generated movements, we categorized acoustic onsets into soft/loud and movement onsets into slow/fast. For each participant and experimental condition, we selected acoustic and motion onsets with the lowest and highest 20% values of spectral flux or velocity magnitude, respectively. Similarly, to quantify the ERP modulation as a function of coordination, we grouped coordination onsets into their two possible values: change to in-phase or anti-phase. Following established ERP literature (Jokisch et al., 2005; Novembre et al., 2018; Vercillo et al., 2018), epochs linked to external stimuli (music and other) were baseline corrected using a prestimulus interval (−250 to 0 ms), while epochs involving internally initiated actions (self and coordination) were baseline corrected using the entire epoch duration. Differences between the two groups (soft vs loud, slow vs fast, or in-phase vs anti-phase) were tested separately for each experimental condition, by means of a cluster-based permutation test (implemented in FieldTrip, with 1,000 permutations; Maris and Oostenveld, 2007). This analysis focused on the EEG channels of interest informed by the mTRF results: Fz (music), C3 and C4 (self-generated movements), Oz (other-generated movements), and Oz (social coordination).
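As a sketch of the grouping and baselining described above (illustrative NumPy code with assumed names; the 20% split and the two baselining schemes follow the text):

```python
import numpy as np

def split_and_baseline(epochs, intensities, fs=100.0, tmin=-0.25,
                       external_stimulus=True):
    """Split epochs into lowest/highest 20% intensity groups and baseline-correct.

    epochs: (n_epochs, n_channels, n_times); intensities: per-epoch spectral
    flux or velocity magnitude at onset. For external stimuli (music, other),
    the prestimulus interval is used as baseline; for internally initiated
    events (self, coordination), the entire epoch is used.
    """
    lo, hi = np.percentile(intensities, [20, 80])
    low_grp, high_grp = epochs[intensities <= lo], epochs[intensities >= hi]

    n_pre = int(round(-tmin * fs))   # number of prestimulus samples
    def baseline(ep):
        ref = ep[:, :, :n_pre] if external_stimulus else ep
        return ep - ref.mean(axis=2, keepdims=True)

    return baseline(low_grp), baseline(high_grp)
```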

Body-part–specific mTRF (self)

In the main analysis, motor activity was assessed using mTRFs predicted by the kinematics of self-generated bounce, as this movement explained most motor activity across the 15 principal movements identified via PCA. Hence, the main analysis does not differentiate between body parts, as the bounce movement activates nearly all of them (Fig. 2, Movie 1), making it challenging to determine whether the movement of specific body parts drove specific motor activities. To address this issue, we leveraged kinematic data from all parts of the body to calculate the unique contribution of self-generated motion to the EEG from the left and right hand, left and right foot, and head velocity magnitudes. Specifically, we created a full model that included major body markers (“LB Head,” “LF Head,” “RF Head,” “RB Head,” “Sternum,” “L Shoulder,” “R Shoulder,” “L Hand,” “R Hand,” “Pelvis,” “L Hip,” “R Hip,” “L Knee,” “L Foot,” “R Knee,” “R Foot”) along with neck EMG controls, and five reduced models, each excluding specific markers: left/right hand markers, left/right foot markers, and the average of the four head markers. Markers expected to be almost intrinsically correlated with hand and foot movements (e.g., elbows, wrists, and ankles) were not included in the full model. As in the main analysis, the unique contribution of each body part's kinematics to the EEG variance was determined by the difference in prediction accuracy between the full model and each reduced model.

Encoding of social coordination

Coordination beyond self and other?

Social coordination was operationalized as the spatiotemporal alignment of movements produced by participants (self-generated) and their partners (other-generated). Specifically, this construct assessed whether participants and their partners not only bounced at the same time but also in the same direction. As such, social coordination relied on both temporal features (velocity magnitude time series) and spatial features (velocity sign time series), with the latter indicating the up versus down phases of movement. In contrast, the measures of self and other were derived solely from temporal features. Consequently, it was essential to conduct a control analysis to assess the extent to which social coordination was influenced by the spatial characteristics of both self- and other-generated movements. To address this, we implemented an mTRF analysis utilizing a comprehensive model that incorporated music, self-generated and other-generated movements (velocity magnitude time series), social coordination, and the spatial directions of both self- and other-generated movements (velocity sign time series). For this control analysis, we did not include other control variables, such as muscular activity, because the previous analyses already demonstrated that these do not predict social coordination.

Coordination ERPs driven by self or other?

In our main ERP analysis, we extracted “coordination ERPs” by epoching EEG time series at transition onsets between in-phase and anti-phase coordination (see above, Extraction of ERPs). These transitions could potentially arise from changes in movement direction elicited by either the self or the partner. To disentangle these two possibilities—specifically, whether ERPs related to social coordination were driven by self-generated movements (self) or by partner-generated movements (other)—we categorized coordination ERPs into two distinct groups: those triggered by self-movements (i.e., when coordination changes were aligned with shifts in the velocity sign of self-generated movements) and those triggered by partner movements (i.e., when coordination changes aligned with shifts in the velocity sign of other-generated movements). We quantified the in-phase/anti-phase ERP modulation separately for these two groups, following the methods outlined previously (see above, ERP sensitivity to variables' intensity). Differences between in-phase and anti-phase onsets were assessed independently for the “self” and “other” groups in each experimental condition using a cluster-based permutation test (implemented in FieldTrip with 1,000 permutations; Maris and Oostenveld, 2007). This analysis focused on the channel of interest informed by the mTRF results: Oz.

Results

Multivariate temporal response functions

In our analysis, we assessed the unique contributions of four variables to the EEG by comparing the prediction gain (Δr) between a full mTRF model and reduced models excluding each variable of interest (see Materials and Methods for details). The results showed that musical sounds, self-generated movements, other-generated movements, and social coordination each made distinct contributions to participants’ neural activity. This allowed us to isolate four neural processes co-occurring during dyadic dance: (I) auditory perception of music, (II) control of movement, (III) visual perception of the partner's body movements, and (IV) visual tracking of social coordination. These processes were clearly distinguished from ocular, facial, and neck muscle artifacts (V, VI, and VII). The following sections provide detailed information on each of these EEG activities.

(I) Auditory perception of music. The spectral flux of the music uniquely predicted EEG across frontal and parietal electrodes, as evidenced by the prediction gain Δr (the difference between the prediction of the full model and that of the reduced model excluding spectral flux) at each electrode (Fig. 3a). A repeated-measures ANOVA, with “musical input” and “visual contact” as factors, yielded a main effect of vision, demonstrating a significant reduction in the prediction gain Δr when participants could see each other (F(1,57) = 7.48, p = 0.033; Fig. 4). This finding suggests a diminished neural tracking of music when participants could see their partners. The regression weights associated with the music TRF model (representing electrode Fz) highlighted three poststimulus modulations, i.e., a positive-negative-positive pattern with peaks at approximately +60, +120, and +200 ms poststimulus, respectively (Fig. 3b). The weights also exhibited a consistent peak at approximately −200 ms prestimulus, which, considering the periodic rhythmic nature of the music, is likely evoked by the preceding beat sound. We confirmed this by observing that the differential intensity (specifically, the spectral flux value) of the preceding beat modulated the amplitude of this −200 ms peak.

Figure 3.

Distinct EEG activities related to music, self- and other-generated movements, social coordination, and muscle artifacts. a, Topographical maps of the unique contribution of each model variable to the predicted EEG. Δr topographical maps represent the grand-average difference in EEG prediction accuracy between the reduced models (excluding the variable of interest) and the full model (including all four variables, plus ocular, facial, and neck muscle activity; Fig. 1d), for each EEG electrode and experimental condition. b, Ridge regression weights for TRFs corresponding to music (Fz), self-generated movements (averaged across C3, C4), other-generated movements (Oz), social coordination (Oz), and ocular (F8), facial (T8), and neck (Oz) muscle activity for the full-model TRF. Grand-average weights are shown. Shaded areas represent ±1 SEM.

Figure 4.

Comparison of unique contributions across experimental conditions. Bars indicate the grand-average unique contributions (averaged over electrodes showing a gain; Δr > 0) of each model variable, across conditions. Error bars represent ±1 SEM. Stars indicate significant main effects of visual contact and musical input, as well as the interaction between the two factors (2 × 2 repeated-measures ANOVA, Bonferroni corrected; *pbonf < 0.05, **pbonf < 0.01, ***pbonf < 0.001).

(II) Control of movement (self-generated). Self-generated movements uniquely predicted EEG across central and occipital electrodes, as indicated by the electrode-specific prediction gain Δr (Fig. 3a). The ANOVA did not yield evidence of significant effects of musical input or visual contact on the unique contribution of self-generated movements to EEG signals, suggesting comparable motor control processes across conditions (all ps > 0.224; Fig. 4). The TRF weights associated with self-generated movements (representing the average of electrodes C3 and C4) highlighted three main modulations, i.e., a negative-positive-negative pattern with peaks at approximately −100, 0, and +80 ms relative to movement onset, respectively (Fig. 3b).

(III) Visual perception of partner's body movements (other-generated). Other-generated movements uniquely predicted EEG across occipital electrodes overlying the visual cortex (Fig. 3a), only when participants could see each other. This was confirmed by the ANOVA, yielding a main effect of visual contact (F(1,57) = 83.23, p < 0.001; Fig. 4). This finding is consistent with the expectation that neural tracking of others’ movements can only occur when these movements are observable. The TRF weights associated with other-generated movements (representing electrode Oz) highlighted a biphasic modulation characterized by a positive peak at approximately +70 ms and a negative peak at approximately +160 ms relative to movement onset (Fig. 3b).

(IV) Social coordination. Social coordination uniquely predicted EEG primarily across occipital electrodes (Fig. 3a), especially when participants could see each other and listened to the same music. This was supported by the ANOVA, which yielded main effects of visual contact (F(1,57) = 249.75, p < 0.001) and musical input (F(1,57) = 30.22, p < 0.001), along with a significant interaction effect (F(1,57) = 50.10, p < 0.001; Fig. 4). Follow-up comparisons revealed that EEG prediction accuracy was specifically enhanced when participants danced to the same music, but only with visual contact (Δ = 0.0009, SE = 0.0001, p < 0.001); no music effect was observed without visual contact (p = 0.676; Fig. 4). This suggests that the level of coordination between participants is encoded in each participant's EEG and that neural tracking occurs primarily when partners are visible and synchronizing to the same musical tempo. In this condition, the TRF weights (representing electrode Oz) exhibited a quadriphasic pattern characterized by negative-positive-negative-positive peaks, at −180, −90, +30, and +160 ms relative to a change in coordination (between in-phase and anti-phase—see Materials and Methods), respectively (Fig. 3b).

(V, VI, and VII) Ocular, facial, and neck muscle artifacts. EOG and EMG signals uniquely predicted EEG at electrode sites that closely matched artifactual topographical maps documented in previous EEG research (Fig. 3a; Goncharova et al., 2003; Plöchl et al., 2012). Specifically, EMG from facial and neck muscles predicted EEG activity at the scalp periphery, which is typical of muscle contraction topographies (Goncharova et al., 2003), while EOG predicted EEG activity near the eyes (electrodes AF7 and AF8, approaching the lateral canthi), characteristic of eye saccades (Plöchl et al., 2012). Note that most blink-related artifacts were presumably removed beforehand via the ASR and ICA pipelines (see Materials and Methods). The TRF weights for eye, facial, and neck movements displayed features of instantaneous impulse responses (Fig. 3b), indicating that noncerebral signals propagate to the EEG without measurable delay—a characteristic previously established for artifact leakage (Croft and Barry, 2000). Additionally, these EOG and EMG signals contributed orders of magnitude more to the EEG than brain processes (compare Δr scales within Fig. 3a), another expected property of EMG and EOG activations (Urigüen and Garcia-Zapirain, 2015). Collectively, these findings underscore the effectiveness of our analysis in distinguishing simultaneous neurophysiological processes from each other, as well as from movement-related artifact leakage.

Event-related potentials

To elucidate the physiological origins of the temporal responses (i.e., the model weights) estimated by the mTRFs, we extracted ERPs by epoching the EEG time series around salient changes (see Materials and Methods) in music, self-generated movements, other-generated movements, and social coordination. Notably, the focus was not on the condition-specific contribution of these responses to the EEG, as ERP analysis cannot fully account for concurrent contributions from other variables. Rather, the goal was to demonstrate that the ERPs exhibited (1) morphologies closely resembling the TRF weights observed earlier (compare Figs. 5, 3b) and (2) physiological properties consistent with typical EEG markers of sensory and motor processes, as established in laboratory-controlled studies. Detailed ERP results for each process are presented in the following sections.
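As an illustration of the epoching logic (not the authors' exact code), the sketch below cuts fixed windows around pre-detected event samples, applies a pre-event baseline correction, and averages across epochs. Event detection (e.g., spectral-flux peaks or velocity peaks) is assumed to have been done beforehand, and all names are placeholders.

import numpy as np

def extract_erp(eeg, event_samples, fs, tmin=-0.3, tmax=0.5):
    # eeg: (n_samples, n_channels) continuous recording; event_samples: sample indices of events
    n_pre, n_post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in event_samples:
        if s - n_pre >= 0 and s + n_post <= eeg.shape[0]:
            epoch = eeg[s - n_pre:s + n_post].copy()
            epoch -= epoch[:n_pre].mean(axis=0)   # baseline-correct on the pre-event window
            epochs.append(epoch)
    epochs = np.stack(epochs)                     # (n_epochs, n_times, n_channels)
    return epochs, epochs.mean(axis=0)            # the average across epochs is the ERP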

(I) Auditory perception of music. The individual sounds embedded within the musical tracks elicited a characteristic triphasic ERP response, consisting of an early positivity (P50), followed by a widespread negativity (N100), and a later positivity (P200), all displaying a frontal topographic distribution (Fig. 5). This pattern aligns with established findings from both ERP (Novembre et al., 2018; Di Liberto et al., 2020) and TRF studies (Di Liberto et al., 2015, 2020; Fiedler et al., 2019; Jessen et al., 2019; Kern et al., 2022) in motionless participants and closely resembles the regression weights of the music TRF observed in our study with dancing participants (Fig. 3b). These similarities suggest that our music TRF primarily captured phase-locked responses evoked by the individual sounds embedded within the musical stimuli, as observed in previous work (Di Liberto et al., 2020; Bianco et al., 2024). To validate this assumption, we further report a known physiological property of these responses—ERP amplitude sensitivity to variations in stimulus intensity (Somervail et al., 2021)—as evidenced by the amplitude of the P200 being larger in response to loud versus soft sounds (Fig. 5).

(II) Control of movement (self-generated). ERPs time-locked to self-generated movements displayed a triphasic pattern, characterized by a premotor negativity (N-100), a positivity at movement onset (P0), and a postmotor negativity (N100), with a central distribution (Fig. 5). These components are reminiscent of movement-related cortical potentials (Shibasaki et al., 1980; Hallett, 1994), which might include steady-state movement-evoked potentials (Gerloff et al., 1997) or readiness potentials (Kornhuber and Deecke, 1965; Vercillo et al., 2018; see also body-part–specific analyses and Fig. 6). The pattern closely mirrors the regression weights observed in our TRF model of self-generated movements (Fig. 3b). Notably, the amplitude of the ERPs associated with self-generated movements was larger during relatively faster, as opposed to relatively slower, movements, a pattern previously suggested to reflect increased motor activity during higher-rate movement execution (Brunia et al., 2011). These results suggest that the mTRF model effectively captured EEG potentials traditionally linked to motor control, with amplitudes modulated by movement speed.

(III) Visual perception of partner's body movements (other-generated). When the participants could see each other, the observed partner-generated movements elicited biphasic responses in occipital regions, characterized by a positive peak at ∼70 ms (P70) and a negative peak at ∼160 ms (N160; Fig. 5). This pattern resembles traditional visual responses to biological motion, notably characterized by the N170 component, typically observed at ∼170 ms postmovement onset (Kubová et al., 1995; Bach and Ullrich, 1997; Puce et al., 2000; Jokisch et al., 2005). Similar to the responses associated with music and self-generated movements, these ERPs closely align with the regression weights yielded by the TRF model of other-generated movements (Fig. 3b). As for ERPs evoked by motor control (previous section), ERP amplitudes scaled with movement speed, most prominently under visual contact in the different-music condition (in contrast, the modulation in the same-music condition was less clear, likely obscured by concurrent neural processes not considered in the ERP analysis, such as those related to coordination). The increased ERP amplitudes for faster compared with slower movements (Fig. 5) further highlight a well-established physiological property of sensory ERPs: their sensitivity to variations in stimulus intensity (Somervail et al., 2021).

(IV) Social coordination. ERPs time-locked to changes in social coordination were associated with quadriphasic EEG modulations in occipital regions across all conditions. However, ERP amplitude differences between changes to in-phase versus anti-phase coordination emerged only when participants could see their partners and listened to same-tempo music (Fig. 5). The response pattern appears to bridge motor control and movement observation processes, showing a triphasic N-P-N sequence, similar to self-generated movement ERPs, followed by a positive occipital peak at 160 ms postonset—resembling the inverted polarity of the posterior N160 observed for other-generated movements. The presence of a clear pattern in nonvisual conditions suggests that these ERPs partially reflect motor activity, as changes in coordination coincide with movement initiation by either the self or the partner, a confound that traditional ERP analysis fails to fully resolve, unlike TRF analysis. Supporting this interpretation, ERP amplitude in nonvisual conditions did not vary between changes to in-phase and anti-phase (Fig. 5), and the TRFs—designed to disentangle concurrent processes—did not reveal any EEG activity related to coordination, beyond motor activity, in the nonvisual conditions (Fig. 3b).

Figure 5.

Event-related potential (ERP) analysis. ERPs evoked by salient changes in music, self-generated movements, other-generated movements, and social coordination. EEG time series were epoched to peaks of spectral flux for music, peaks of velocity magnitude for self- and other-generated movements, and changes between in-phase and anti-phase for social coordination. ERP amplitudes were compared across two groups of epochs within each variable: loud versus soft sounds for music (Fz), fast versus slow movements for self-generated (averaged across C3, C4) and other-generated (Oz) movements, and changes to in-phase versus anti-phase for social coordination (Oz). Grand-average ERPs are shown for the two groups of epochs within each variable and across all experimental conditions. Colored shaded areas represent ±1 SEM, while gray shaded regions highlight significant differences in ERP amplitude between groups of epochs at a given time point (permutation test over time, at the electrode of interest, cluster-corrected). Topographical maps display amplitude differences across electrodes within the time windows of identified clusters.
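The cluster-corrected permutation test mentioned in the caption can be illustrated with a short sketch. Assuming subject-level average ERPs for the two groups of epochs are available at the electrode of interest, a paired test (one-sample test on the difference waves) over time could look like the following. The MNE-Python implementation is used here as a stand-in for the authors' routine, and the placeholder arrays merely stand in for real data.

import numpy as np
from mne.stats import permutation_cluster_1samp_test

# Placeholder data standing in for subject-average ERPs at the electrode of interest:
# (n_subjects, n_times) arrays for the two groups of epochs (e.g., loud vs soft sounds).
rng = np.random.default_rng(0)
erp_loud = rng.normal(size=(20, 200))
erp_soft = rng.normal(size=(20, 200))

diff = erp_loud - erp_soft                        # within-subject difference wave
t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
    diff, n_permutations=5000, tail=0, seed=0)
sig_clusters = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]   # time clusters surviving correction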

Figure 6.

mTRFs tease apart body-part–specific motor activity. a, Topographical maps of the unique contribution of (self-generated) left- and right-hand movements to the predicted EEG. Δr topographical maps represent the grand-average difference in EEG prediction accuracy between the reduced models (excluding the body part of interest) and the full model (including all body parts, plus the neck control variable), for each EEG electrode and across all trials, regardless of experimental condition. We ran the TRF models without considering experimental conditions, given that no statistical difference was found across conditions in our main analysis (Fig. 4). Separate TRF models for hands were derived by excluding each marker (“L Hand” or “R Hand”) from the full model. b, Same as a, but for left- and right-foot movements. Separate TRF models for feet were derived by excluding each marker (“L Foot” or “R Foot”) from the full model. c, Same as a and b, but for head movements. The head TRF model was derived by excluding all four head markers together.

mTRFs tease apart body-part–specific motor activity

Thus far, the EEG activity related to self-generated movements was extracted using the velocity time series of bounce movements (see Materials and Methods), either to predict the EEG signal (mTRF analysis) or to time-lock EEG epochs (ERP analysis). To determine whether specific body parts contribute to distinct motor activities, we performed an additional TRF analysis using velocity time series associated with five distinct body parts: left and right hands, left and right feet, and head. Rather than relying on principal movements extracted from PCA, we modeled EEG signals using the kinematics of these specific body parts as input variables in the TRF models (Fig. 6). The unique contributions of the left and right hands (beyond those of all other body parts) to the EEG prediction exhibited lateralized spatial maps at central sites (Fig. 6a), a typical marker of hand motor control (Kornhuber and Deecke, 1965; Deecke et al., 1969; Shibasaki et al., 1980; Gerloff et al., 1997; Smulders and Miller, 2011; Vercillo et al., 2018; O'Neill et al., 2024). Furthermore, foot movements exhibited a more posterior topographical activation than hand movements (Fig. 6b), reminiscent of EEG differences found when comparing motor activity across hands and feet (Brunia et al., 2011). Notably, these foot-related EEG activities showed no clear lateralization, which is expected given the organization of the motor cortex (Gordon et al., 2023; O'Neill et al., 2024) and the limited spatial resolution of EEG (Osman et al., 2005). Indeed, because the feet are represented in the deeper, more central regions of the motor cortex, along the inner surface of the longitudinal fissure, it is notoriously difficult to differentiate EEG activity evoked by left versus right foot movements (Osman et al., 2005; Jensen et al., 2023). Finally, head movements were associated with EEG activity not only at motor sites, such as electrodes C3 and C4, but also in occipital regions (Fig. 6c). Importantly, this occipital activation did not result from neck muscle artifact leakage, as the contribution of neck EMG was already accounted for in the full model (see Materials and Methods). Moreover, no such occipital activation was found for hand or foot movements, suggesting that head movements specifically involve visual (besides motor) processing. Visual processes could be at play when moving the head (e.g., bouncing or head bobbing), as this involves salient changes in the field of view. Taken together, these findings support the conclusion that our TRF and ERP analyses (Figs. 3, 5) efficiently isolated neural processes related to self-generated movements. Moreover, beyond validating these prior results, this new analysis demonstrates the feasibility of isolating the motor activity of specific body parts [note that the prior analyses assessed motor activity related to bounce (which involves all body parts), limiting the possibility of isolating body-part–specific motor activity].
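The body-part–specific analysis follows the same full-versus-reduced logic sketched earlier, applied at the level of individual kinematic markers. A hedged sketch of that loop is given below; the column groupings, variable names, and the ridge fit are illustrative assumptions rather than the authors' implementation.

import numpy as np
from sklearn.linear_model import Ridge

def heldout_r(X_tr, y_tr, X_te, y_te, alpha=1.0):
    # Held-out Pearson r between recorded and predicted EEG at one electrode.
    y_hat = Ridge(alpha=alpha).fit(X_tr, y_tr).predict(X_te)
    return np.corrcoef(y_te, y_hat)[0, 1]

# Hypothetical mapping from each body part to its (lagged) design-matrix columns:
# columns_of = {"L_hand": [...], "R_hand": [...], "L_foot": [...], "R_foot": [...], "head": [...]}
# r_full = heldout_r(X_tr, eeg_tr, X_te, eeg_te)
# delta_r = {}
# for part, cols in columns_of.items():
#     keep = [c for c in range(X_tr.shape[1]) if c not in set(cols)]
#     r_reduced = heldout_r(X_tr[:, keep], eeg_tr, X_te[:, keep], eeg_te)
#     delta_r[part] = r_full - r_reduced   # unique contribution of that body part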

Social coordination encoding goes beyond self and other

Coordination beyond self and other

In previous analyses, we demonstrated that adding the social coordination variable to models including music, self-, and other-generated movements yielded a gain in EEG prediction, suggesting neural encoding of coordination (Figs. 3, 4). To ensure this gain was not solely attributable to the inclusion of spatial direction features—i.e., the up versus down phases of bounce movement inherent in the social coordination variable but absent from the self- and other-generated movement variables—we conducted a supplementary mTRF analysis that included the spatial directions of both self- and other-generated movements. This analysis yielded cross-condition differences in unique contributions (i.e., Δr) that were identical to those observed in our primary analysis (compare Fig. 7a, top, with Figs. 3a, 4), along with consistent model weights (compare Fig. 7a, bottom, with Fig. 3b). These findings indicate that the encoding of social coordination extends beyond merely representing the spatial directions of self- and other-generated movements in isolation. They further suggest that the encoding of coordination is not merely driven by a modulation of partner-evoked visual processes as a function of whether the self moves congruently or incongruently with the partner (thereby suppressing or amplifying the observed movements relative to the field of view). To rule out this possibility, we conducted an additional control analysis in which we rereferenced the other-generated movements to the position of the self. Even following such rereferencing, social coordination yielded a significant and unique contribution to EEG recorded from occipital sites, and this contribution was most pronounced under conditions of visual contact and shared music. Taken together, these findings indicate that the reported encoding of coordination is linked to a high-order process tracking the alignment between self- and partner-related movements, independently of the encoding of self and other taken alone.
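To make the two control variables concrete, the sketch below shows one possible way, illustrative and not necessarily the authors' exact definition, to derive an in-phase/anti-phase coordination signal from the two dancers' vertical bounce velocities, and to rereference the partner's movement to the observer's own position. All names are placeholders.

import numpy as np

def coordination_signal(v_self, v_other):
    # +1 when both dancers move in the same vertical direction (in-phase),
    # -1 when they move in opposite directions (anti-phase)
    return np.sign(v_self * v_other)

def reref_other_to_self(pos_other, pos_self):
    # Express the partner's vertical position relative to the observer's own position,
    # approximating what varies within the observer's field of view.
    return pos_other - pos_self

# coord = coordination_signal(v_self, v_other)
# change_samples = np.flatnonzero(np.diff(coord) != 0) + 1   # shifts between in-phase and anti-phase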

Figure 7.

Tracking of social coordination beyond self and other. a, Results of the mTRF models associated with social coordination, incorporating spatial directions of self- and other-generated movements. Top, Social coordination uniquely predicted EEG activity at electrode sites similar to those reported in the main analysis (Fig. 3a). Statistics revealed the exact same differences in unique contribution as observed in our main analysis (Fig. 4). Bottom, TRF regression weights exhibited similar patterns as in the main analysis (Fig. 3b). b, Coordination-related ERPs time-locked to self-generated (top) or other-generated (bottom) movement changes, at electrode Oz. ERPs related to changes to in-phase and anti-phase coordination are represented by continuous and dashed lines, respectively. Grand-average ERPs are shown for the two groups of trials associated with each variable across all experimental conditions. Colored shaded areas represent ±1 SEM, while gray shaded regions highlight significant differences in ERP amplitude between groups of trials at a given time point (permutation test over time, at Oz, cluster-corrected). Topographical maps display amplitude differences across electrodes within the time windows of identified clusters.

Movie 1.

The principal (dance) movements, related to Figure 2. Video showing original movement data (left) and their decomposition into 15 principal movements (PMs) explaining >95% of the kinematics variance (right). Representative data are displayed (excerpt from a single trial, corresponding to when participants listened to the full refrain of the song). For the sake of clarity, the PMs are animated with different levels of exaggeration [i.e., the PM scores were amplified by a factor of 1.5 (PM3), 2 (PM4, 7, 9, 11, 15), 3 (PM8), or not amplified (all other PMs)]. The PMs are reminiscent of common dance moves (spelled out in italics). [View online]
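The principal-movement decomposition shown in the movie can be sketched as a PCA on the concatenated posture vectors, keeping enough components to reach the reported variance threshold. The snippet below is a simplified illustration (centering only; the authors' preprocessing and normalization steps are not reproduced), with placeholder names.

import numpy as np
from sklearn.decomposition import PCA

def principal_movements(motion, var_target=0.95):
    # motion: (n_samples, n_markers * 3) posture vectors over time
    pca = PCA()
    scores = pca.fit_transform(motion - motion.mean(axis=0))
    n_keep = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_target)) + 1
    return scores[:, :n_keep], pca.components_[:n_keep], n_keep   # PM scores, PM loadings, count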

Coordination ERPs are time-locked to other-generated movements

“Coordination ERPs” were extracted by epoching the EEG time series to shifts from anti-phase to in-phase coordination and, vice versa, from in-phase to anti-phase. Here we investigated how such ERPs changed as a function of whether the shifts were driven by changes in movement direction produced by the self (movement production) or by the other (movement observation; see Materials and Methods). Our analysis indicated that the EEG modulations previously associated with changes to in-phase coordination, observed specifically at occipital sites (Oz) and specifically under conditions of visual contact and same-tempo music (Fig. 5), were present only when these changes were time-locked to other-generated movement changes (Fig. 7b). In other words, larger-amplitude ERPs were evoked when a partner initiated a change in movement direction leading to in-phase rather than anti-phase coordination. This result further strengthens the conclusion that the brain encodes social coordination and that this encoding is specifically driven by the partner's movements being in phase versus anti-phase with respect to self-initiated movements.
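A hedged sketch of the attribution step described here: each sign flip of the coordination signal is classified as other-generated if it coincides, within a small tolerance window, with a direction change of the partner's vertical velocity, and as self-generated otherwise. The tolerance, names, and exact criterion are illustrative assumptions rather than the authors' procedure.

import numpy as np

def attribute_changes(change_samples, v_other, tol=2):
    # change_samples: sample indices where the coordination signal flips sign
    # v_other: partner's vertical velocity; tol: half-window (in samples) around each flip
    self_driven, other_driven = [], []
    for s in change_samples:
        win = v_other[max(s - tol, 0):s + tol + 1]
        other_flip = np.any(np.diff(np.sign(win)) != 0)   # partner changed direction near the flip
        (other_driven if other_flip else self_driven).append(s)
    return np.array(self_driven), np.array(other_driven)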

Discussion

This study demonstrates that the neural processes underlying dance—a complex, natural, and social behavior—can be effectively isolated from EEG signals recorded from dyads dancing together. Using multivariate TRF models applied to dual EEG and full-body kinematics, we disentangled intertwined neural processes, separated them from movement artifacts, and confirmed their physiological origins through ERP analyses. This approach delineated sensory and motor processes underlying free-form, naturalistic dance: (I) auditory tracking of music, (II) control of self-generated movements, and (III) visual monitoring of partner-generated movements. Crucially, we also uncovered a previously unknown neural marker of social processing: (IV) visual encoding of social coordination, which emerges only when partners can make visual contact, is topographically distributed over the occipital areas and is driven by movement observation rather than initiation. Additionally, movement-specific models highlighted “bounce” as the primary dance move driving EEG activity associated with both self-generated movements and movements observed in the partner. Together, these findings illustrate how advanced neural analysis techniques can illuminate the mechanisms supporting complex natural behaviors.

mTRFs can unravel the complex orchestration of natural behavior

Recent advancements in mobile imaging and denoising techniques have enhanced our ability to study neural activity during real-world behavior (Bateson et al., 2017; Niso et al., 2023). However, disentangling the contributions of multiple simultaneous neural processes remains challenging. In this study, we addressed this challenge within the context of a spontaneous, interactive, yet controlled task, balancing ecological validity with experimental control (D'Ausilio et al., 2015). Using mTRFs, we successfully isolated four distinct, yet overlapping, neural processes underlying dyadic dance. ERP analyses confirmed that mTRF-modeled responses (i.e., model weights) align with well-characterized EEG potentials linked to sensory perception and motor control. This result suggests that mTRFs can capture physiologically established signals, akin to ERP analyses, but in real-world scenarios with multiple concurrent activities—contexts where traditional ERP approaches fall short.

ERP analyses struggle to isolate the unique contributions of individual processes amid overlapping neural activities. This limitation is evident in our results: visual ERP modulation to movement speed was weak under visual contact and same-music conditions (Fig. 5), while mTRFs captured robust visual tracking (Figs. 3, 4). This discrepancy likely stems from social coordination activity, which ERP analysis cannot disentangle, and which was particularly prominent in these conditions. Similarly, coordination ERPs appeared in nonvisual conditions (Fig. 5), whereas mTRFs showed no corresponding activity, likely reflecting unaccounted self-motor contributions in ERP analyses. Although techniques like frequency tagging have addressed some of these challenges (Varlet et al., 2020, 2023; Cracco et al., 2022), they are limited to identifying periodic EEG responses and typically focus on univariate kinematics (e.g., gait cycles or hand trajectories). In contrast, mTRFs offer a precise characterization of neural responses to diverse features, effectively separating them from concurrent activities.

The interplay between music and partner tracking

Dyadic dance requires simultaneous sensory processing of music and a partner's movements, both of which contribute to coordinated behavior (Bigand et al., 2024). To what extent do these concurrent streams of information influence EEG activity, and how are these effects modulated by social factors like visual contact? Our findings show that model weights and ERPs associated with music, partner movements, and coordination exhibit similar amplitude, suggesting that each element—whether a musical sound, observed movement, or change in coordination—elicits an EEG response of comparable magnitude. Notably, visual processes accounted for less variance at occipital sites than auditory processes at frontal sites (Fig. 3a, Δr scales). This may reflect the broader range of EEG signals in occipital regions, which likely include visual processing of not only partner movements but also other visual cues and, importantly, artifactual leakage from neck movements (Fig. 3a).

In visual-contact conditions, where both music (acoustic) and partner (visual) information were present, we observed a decrease in music tracking (Fig. 4). This reduction may arise from competition between visual and auditory modalities for attentional resources (Woods et al., 1992; Lavie, 2005; Molloy et al., 2015), especially in naturalistic dance, where both auditory and visual inputs drive coordination (Bigand et al., 2024). Naturalistic dance likely places heightened demands on visual input, as recent findings suggest that visual drivers dominate full-body rhythmic synchronization—a phenomenon not observed in simpler tasks like finger tapping (Nguyen et al., 2024).

Movement control and observation

Our principal component analysis of dance kinematics revealed that bounce movements accounted for most EEG activity associated with self-generated movements (Fig. 2). Intriguingly, these movements predicted EEG activity not only over motor areas (e.g., electrodes C3 and C4), but also at occipital sites (Fig. 3a). To better understand these activities, we further dissected the components of bounce control, pinpointing motor activity specific to different body parts (Fig. 6). In participants engaged in free-form dancing, we successfully replicated established EEG findings observed during isolated movements, with more posterior-medial activity associated with foot movements and more central-lateralized activity for hand movements (Brunia et al., 2011). Notably, our analysis showed that head displacement was linked to occipital brain activity in addition to central motor activity, likely due to visual responses resulting from changes in the visual field (Testard et al., 2024). This analysis clarifies why the main mTRF for self-generated bounce movements included activity at occipital sites (Fig. 3a), suggesting that bouncing not only involves motor activity but also induces significant visual changes.

Bounce also emerged as the movement most predictive of EEG activity linked to visual tracking of a partner's movements. This finding raises an intriguing question: what makes bounce particularly captivating compared with other dance movements? Our previous research has highlighted bounce's key role in fostering interpersonal coordination (Bigand et al., 2024), serving as a supramodal (audio-visual) pace-setter between participants and their partners. This may explain why bounce is so prominent in predicting EEG activity associated with movement observation. This finding also suggests that EEG signals are particularly sensitive to salient movement changes, rather than merely high-amplitude movements. Indeed, while bounce explained <1% of the total kinematic variance (ranking 10th in the PCA), it accounted for over 80% of the EEG variance. This likely reflects bounce's heightened salience, possibly driven by the fact that this movement was the only one peaking sharply with each musical beat (Bigand et al., 2024).

Encoding of social coordination

Our study reveals that coordination between self- and other-generated movements uniquely predicts EEG signals recorded at occipital electrodes. Recent research in social neuroscience has shown that EEG can delineate separate components supporting coordinated behaviors: some monitor self- and partner-generated actions distinctly, while others integrate the joint action outcome produced by oneself and the partner (Novembre et al., 2016; Varlet et al., 2020). In line with this, we identified three distinct neural processes—control of one's own movements, observation of a partner's movements, and processing of social coordination (Fig. 3)—and observed heightened coordination tracking in conditions where musical synchrony between participants was greater. Importantly, our findings suggest that the encoding of coordination does not merely combine the individual “self” and “other” processes; rather, it captures a distinct neural representation of their coordination (see Results and Fig. 7a).

The temporal response underlying social coordination tracking integrates both motor control (self) and movement observation (other), as evidenced by the N-P-N pattern and a subsequent modulation peaking ∼160 ms. Notably, this response appears to be triggered by observing a partner's movements, rather than initiating one's own actions. ERPs associated with changes in the partner's movements—rather than self-initiated actions—were modulated by social coordination at visual sites (see Results and Fig. 7b). This finding suggests that neural tracking of coordination is more reliant on visual monitoring of the partner than on the internal control of one's own movements, aligning with earlier observations that this process is localized in visual areas and enhanced during visual contact.

In summary, we identified a previously unknown neural marker of social processing, with five key observations: (1) it is topographically distributed over the occipital areas; (2) it emerges when participants can see each other and is most pronounced when musical synchrony between them is high; (3) its underlying neural signal integrates components from both self and other processes; yet (4) rather than merely combining the individual “self” and “other” components, it represents a distinct neural encoding of their coordination; and (5) it is primarily anchored to movement observation, not movement initiation.

Bridging traditional physiology with real-world applications

Our findings show that neurophysiological signals, traditionally examined in controlled settings, can be disentangled and analyzed within real-world contexts. This highlights the potential for future research to incorporate ecologically valid stimuli and behavioral predictors (e.g., body movements, eye gaze, speech) into multivariate modeling. Such an approach could deepen our understanding of brain processes during live social interactions—a field of growing significance across human adult (Dumas et al., 2010; Pan et al., 2018; Koul et al., 2023; Cross et al., 2024; Orgs et al., 2024), developmental (Wass et al., 2018, 2020; Nguyen et al., 2020, 2021, 2023), and animal studies (Zhang and Yartsev, 2019; Rose et al., 2021; Yang et al., 2021).

Data Availability

EEG, EMG, EOG, kinematic, and musical data have been deposited in IIT Dataverse and are publicly available as of the date of publication: https://doi.org/10.48557/UVZGX6. All original code is publicly available in a GitHub repository: https://github.com/felixbgd/dancing_brain.

Footnotes

  • F.B., S.F.A., and G.N. are supported by the European Research Council (ERC, MUSICOM, 948186). R.B. is supported by the European Union (MSCA, PHYLOMUSIC, 101064334). T.N. is supported by the European Union (MSCA, SYNCON, 101105726). The Open University Affiliated Research Centre at Istituto Italiano di Tecnologia (ARC@IIT) is part of the Open University, Milton Keynes MK7 6AA, United Kingdom. We thank Alison Rigby for her help with data collection and Raoul Tchoï for his help creating the stimuli.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Félix Bigand at felix.bigand{at}iit.it or Giacomo Novembre at giacomo.novembre{at}iit.it.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Bach M, Ullrich D (1997) Contrast dependency of motion-onset and pattern-reversal VEPs: interaction of stimulus type, recording site and response component. Vision Res 37:1845–1849. https://doi.org/10.1016/S0042-6989(96)00317-3
  2. Barraza P, Dumas G, Liu H, Blanco-Gomez G, van den Heuvel MI, Baart M, Pérez A (2019) Implementing EEG hyperscanning setups. MethodsX 6:428–436. https://doi.org/10.1016/j.mex.2019.02.021 pmid:30906698
  3. Bateson AD, Baseler HA, Paulson KS, Ahmed F, Asghar AUR (2017) Categorisation of mobile EEG: a researcher's perspective. Biomed Res Int 2017:5496196. https://doi.org/10.1155/2017/5496196 pmid:29349078
  4. Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G (2024) Neural encoding of musical expectations in a non-human primate. Curr Biol 34:444–450.e5. https://doi.org/10.1016/j.cub.2023.12.019
  5. Bigand F, Bianco R, Abalde SF, Novembre G (2024) The geometry of interpersonal synchrony in human dance. Curr Biol 34:3011–3019.e4. https://doi.org/10.1016/j.cub.2024.05.055 pmid:38908371
  6. Bigand F, Prigent E, Berret B, Braffort A (2021) Decomposing spontaneous sign language into elementary movements: a principal component analysis-based approach. PLoS One 16:e0259464. https://doi.org/10.1371/journal.pone.0259464 pmid:34714862
  7. Brunia CHM, van Boxtel GJM, Böcker KBE (2011) Negative slow waves as indices of anticipation: the bereitschaftspotential, the contingent negative variation, and the stimulus-preceding negativity. In: The Oxford handbook of event-related potential components (Kappenman ES, Luck SJ, eds), pp 190–208. Oxford: Oxford University Press.
  8. Cracco E, Lee H, van Belle G, Quenon L, Haggard P, Rossion B, Orgs G (2022) EEG frequency tagging reveals the integration of form and motion cues into the perception of group movement. Cereb Cortex 32:2843–2857. https://doi.org/10.1093/cercor/bhab385 pmid:34734972
  9. Croft RJ, Barry RJ (2000) Removal of ocular artifact from the EEG: a review. Neurophysiol Clin 30:5–19. https://doi.org/10.1016/S0987-7053(00)00055-1
  10. Cross ES, Darda KM, Moffat R, Muñoz L, Humphries S, Kirsch LP (2024) Mutual gaze and movement synchrony boost observers' enjoyment and perception of togetherness when watching dance duets. Sci Rep 14:24004. https://doi.org/10.1038/s41598-024-72659-7 pmid:39402066
  11. Crosse MJ, Di Liberto GM, Bednar A, Lalor EC (2016) The multivariate temporal response function (mTRF) toolbox: a MATLAB toolbox for relating neural signals to continuous stimuli. Front Hum Neurosci 10:604. https://doi.org/10.3389/fnhum.2016.00604 pmid:27965557
  12. Daffertshofer A, Lamoth CJC, Meijer OG, Beek PJ (2004) PCA in studying coordination and variability: a tutorial. Clin Biomech 19:415–428. https://doi.org/10.1016/j.clinbiomech.2004.01.005
  13. D'Ausilio A, Novembre G, Fadiga L, Keller PE (2015) What can music tell us about social interaction? Trends Cogn Sci 19:111–114. https://doi.org/10.1016/j.tics.2015.01.005
  14. Deecke L, Scheid P, Kornhuber HH (1969) Distribution of readiness potential, pre-motion positivity, and motor potential of the human cerebral cortex preceding voluntary finger movements. Exp Brain Res 7:158–168. https://doi.org/10.1007/BF00235441
  15. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. https://doi.org/10.1016/j.jneumeth.2003.10.009
  16. Desai M, Field AM, Hamilton LS (2024) A comparison of EEG encoding models using audiovisual stimuli and their unimodal counterparts. PLoS Comput Biol 20:e1012433. https://doi.org/10.1371/journal.pcbi.1012433 pmid:39250485
  17. Di Liberto GM, Barsotti M, Vecchiato G, Ambeck-Madsen J, Del Vecchio M, Avanzini P, Ascari L (2021) Robust anticipation of continuous steering actions from electroencephalographic data during simulated driving. Sci Rep 11:23383. https://doi.org/10.1038/s41598-021-02750-w pmid:34862442
  18. Di Liberto GM, Lalor EC (2017) Indexing cortical entrainment to natural speech at the phonemic level: methodological considerations for applied research. Hear Res 348:70–77. https://doi.org/10.1016/j.heares.2017.02.015
  19. Di Liberto GM, O'Sullivan JA, Lalor EC (2015) Low-frequency cortical entrainment to speech reflects phoneme-level processing. Curr Biol 25:2457–2465. https://doi.org/10.1016/j.cub.2015.08.030
  20. Di Liberto GM, Pelofi C, Bianco R, Patel P, Mehta AD, Herrero JL, de Cheveigné A, Shamma S, Mesgarani N (2020) Cortical encoding of melodic expectations in human temporal cortex. Elife 9:e51784. https://doi.org/10.7554/eLife.51784 pmid:32122465
  21. Dumas G, Nadel J, Soussignan R, Martinerie J, Garnero L (2010) Inter-brain synchronization during social interaction. PLoS One 5:e12166. https://doi.org/10.1371/journal.pone.0012166 pmid:20808907
  22. Dunbar RI (2012) On the evolutionary function of song and dance. In: Music, language, and human evolution (Bannan N, ed), pp 201–214. Oxford: Oxford University Press.
  23. Federolf P, Reid R, Gilgien M, Haugen P, Smith G (2014) The application of principal component analysis to quantify technique in sports. Scand J Med Sci Sports 24:491–499. https://doi.org/10.1111/j.1600-0838.2012.01455.x
  24. Fiedler L, Wöstmann M, Herbst SK, Obleser J (2019) Late cortical tracking of ignored speech facilitates neural selectivity in acoustically challenging conditions. Neuroimage 186:33–42. https://doi.org/10.1016/j.neuroimage.2018.10.057
  25. Foster Vander Elst O, Foster NHD, Vuust P, Keller PE, Kringelbach ML (2023) The neuroscience of dance: a conceptual framework and systematic review. Neurosci Biobehav Rev 150:105197. https://doi.org/10.1016/j.neubiorev.2023.105197
  26. Gerloff C, Toro C, Uenishi N, Cohen LG, Leocani L, Hallett M (1997) Steady-state movement-related cortical potentials: a new approach to assessing cortical activity associated with fast repetitive finger movements. Electroencephalogr Clin Neurophysiol 102:106–113. https://doi.org/10.1016/S0921-884X(96)96039-7
  27. Goncharova II, McFarland DJ, Vaughan TM, Wolpaw JR (2003) EMG contamination of EEG: spectral and topographical characteristics. Clin Neurophysiol 114:1580–1593. https://doi.org/10.1016/S1388-2457(03)00093-2
  28. Gordon EM, et al. (2023) A somato-cognitive action network alternates with effector regions in motor cortex. Nature 617:351–359. https://doi.org/10.1038/s41586-023-05964-2 pmid:37076628
  29. Hallett M (1994) Movement-related cortical potentials. Electromyogr Clin Neurophysiol 34:5–13.
  30. Jensen MA, et al. (2023) A motor association area in the depths of the central sulcus. Nat Neurosci 26:1165–1169. https://doi.org/10.1038/s41593-023-01346-z pmid:37202552
  31. Jessen S, Fiedler L, Münte TF, Obleser J (2019) Quantifying the individual auditory and visual brain response in 7-month-old infants watching a brief cartoon movie. Neuroimage 202:116060. https://doi.org/10.1016/j.neuroimage.2019.116060
  32. Jokisch D, Daum I, Suchan B, Troje NF (2005) Structural encoding and recognition of biological motion: evidence from event-related potentials and source analysis. Behav Brain Res 157:195–204. https://doi.org/10.1016/j.bbr.2004.06.025
  33. Kern P, Heilbron M, de Lange FP, Spaak E (2022) Cortical activity during naturalistic music listening reflects short-range predictions based on long-term experience. Elife 11:e80935. https://doi.org/10.7554/eLife.80935 pmid:36562532
  34. Kornhuber HH, Deecke L (1965) Hirnpotentialänderungen bei Willkürbewegungen und passiven Bewegungen des Menschen: Bereitschaftspotential und reafferente Potentiale. Pflügers Arch 284:1–17. https://doi.org/10.1007/BF00412364
  35. Kothe CAE, Jung T-P (2016) Artifact removal techniques with signal reconstruction. Available at: https://patents.google.com/patent/US20160113587A1/en (Accessed May 16, 2024).
  36. Koul A, Ahmar D, Iannetti GD, Novembre G (2023) Spontaneous dyadic behavior predicts the emergence of interpersonal neural synchrony. Neuroimage 277:120233. https://doi.org/10.1016/j.neuroimage.2023.120233
  37. Kubová Z, Kuba M, Spekreijse H, Blakemore C (1995) Contrast dependence of motion-onset and pattern-reversal evoked potentials. Vision Res 35:197–205. https://doi.org/10.1016/0042-6989(94)00138-C
  38. Lalor EC, Power AJ, Reilly RB, Foxe JJ (2009) Resolving precise temporal processing properties of the auditory system using continuous stimuli. J Neurophysiol 102:349–359. https://doi.org/10.1152/jn.90896.2008
  39. Lanzarini F, Maranesi M, Rondoni EH, Albertini D, Ferretti E, Lanzilotto M, Micera S, Mazzoni A, Bonini L (2025) Neuroethology of natural actions in freely moving monkeys. Science 387:214–220. https://doi.org/10.1126/science.adq6510
  40. Lavie N (2005) Distracted and confused? Selective attention under load. Trends Cogn Sci 9:75–82. https://doi.org/10.1016/j.tics.2004.12.004
  41. Luck SJ (2014) An introduction to the event-related potential technique, Ed 2. Cambridge, Massachusetts: MIT Press.
  42. Mao D, Avila E, Caziot B, Laurens J, Dickman JD, Angelaki DE (2021) Spatial modulation of hippocampal activity in freely moving macaques. Neuron 109:3521–3534.e6. https://doi.org/10.1016/j.neuron.2021.09.032 pmid:34644546
  43. Maris E, Oostenveld R (2007) Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods 164:177–190. https://doi.org/10.1016/j.jneumeth.2007.03.024
  44. Mithen SJ (2006) The singing Neanderthals: the origins of music, language, mind, and body. Cambridge, Massachusetts: Harvard University Press.
  45. Molloy K, Griffiths TD, Chait M, Lavie N (2015) Inattentional deafness: visual load leads to time-specific suppression of auditory evoked responses. J Neurosci 35:16046–16054. https://doi.org/10.1523/JNEUROSCI.2931-15.2015 pmid:26658858
  46. Musall S, Kaufman MT, Juavinett AL, Gluf S, Churchland AK (2019) Single-trial neural dynamics are dominated by richly varied movements. Nat Neurosci 22:1677–1686. https://doi.org/10.1038/s41593-019-0502-4 pmid:31551604
  47. Nguyen T, Bánki A, Markova G, Hoehl S (2020) Chapter 1: Studying parent-child interaction with hyperscanning. In: Progress in brain research. New perspectives on early social-cognitive development (Hunnius S, Meyer M, eds), pp 1–24. Amsterdam, Netherlands: Elsevier.
  48. Nguyen T, Lagacé-Cusiac R, Everling JC, Henry MJ, Grahn JA (2024) Audiovisual integration of rhythm in musicians and dancers. Atten Percept Psychophys 86:1400–1416. https://doi.org/10.3758/s13414-024-02874-x
  49. Nguyen T, Reisner S, Lueger A, Wass SV, Hoehl S, Markova G (2023) Sing to me, baby: infants show neural tracking and rhythmic movements to live and dynamic maternal singing. Dev Cogn Neurosci 64:101313. https://doi.org/10.1016/j.dcn.2023.101313 pmid:37879243
  50. Nguyen T, Schleihauf H, Kayhan E, Matthes D, Vrticka P, Hoehl S (2021) Neural synchrony in mother–child conversation: exploring the role of conversation patterns. Soc Cogn Affect Neurosci 16:93–102. https://doi.org/10.1093/scan/nsaa079 pmid:32591781
  51. Niso G, Romero E, Moreau JT, Araujo A, Krol LR (2023) Wireless EEG: a survey of systems and studies. Neuroimage 269:119774. https://doi.org/10.1016/j.neuroimage.2022.119774
  52. Novembre G, Pawar VM, Bufacchi RJ, Kilintari M, Srinivasan M, Rothwell JC, Haggard P, Iannetti GD (2018) Saliency detection as a reactive process: unexpected sensory events evoke corticomuscular coupling. J Neurosci 38:2385–2397. https://doi.org/10.1523/JNEUROSCI.2474-17.2017 pmid:29378865
  53. Novembre G, Sammler D, Keller PE (2016) Neural alpha oscillations index the balance between self-other integration and segregation in real-time joint action. Neuropsychologia 89:414–425. https://doi.org/10.1016/j.neuropsychologia.2016.07.027
  54. O'Neill GC, et al. (2024) Combining video telemetry and wearable MEG for naturalistic imaging. bioRxiv 2023.08.01.551482. Available at: https://www.biorxiv.org/content/10.1101/2023.08.01.551482v2 (Accessed December 16, 2024).
  55. Oostenveld R, Fries P, Maris E, Schoffelen J-M (2011) FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Intell Neurosci 2011:156869. https://doi.org/10.1155/2011/156869 pmid:21253357
  56. Orgs G, Vicary S, Sperling M, Richardson DC, Williams AL (2024) Movement synchrony among dance performers predicts brain synchrony among dance spectators. Sci Rep 14:22079. https://doi.org/10.1038/s41598-024-73438-0 pmid:39333777
  57. Osman A, Müller K-M, Syre P, Russ B (2005) Paradoxical lateralization of brain potentials during imagined foot movements. Brain Res Cogn Brain Res 24:727–731. https://doi.org/10.1016/j.cogbrainres.2005.04.004
  58. O'Sullivan AE, Crosse MJ, Di Liberto GM, Lalor EC (2017) Visual cortical entrainment to motion and categorical speech features during silent lipreading. Front Hum Neurosci 10:679. https://doi.org/10.3389/fnhum.2016.00679 pmid:28123363
  59. Pan Y, Novembre G, Song B, Li X, Hu Y (2018) Interpersonal synchronization of inferior frontal cortices tracks social interactive learning of a song. Neuroimage 183:280–290. https://doi.org/10.1016/j.neuroimage.2018.08.005
  60. Pion-Tonachini L, Kreutz-Delgado K, Makeig S (2019) ICLabel: an automated electroencephalographic independent component classifier, dataset, and website. Neuroimage 198:181–197. https://doi.org/10.1016/j.neuroimage.2019.05.026 pmid:31103785
  61. Plöchl M, Ossandón JP, König P (2012) Combining EEG and eye tracking: identification, characterization, and correction of eye movement artifacts in electroencephalographic data. Front Hum Neurosci 6:278. https://doi.org/10.3389/fnhum.2012.00278 pmid:23087632
  62. Puce A, Smith A, Allison T (2000) ERPs evoked by viewing facial movements. Cogn Neuropsychol 17:221–239. https://doi.org/10.1080/026432900380580
  63. Ramírez-Moreno MA, Cruz-Garza JG, Acharya A, Chatufale G, Witt W, Gelok D, Reza G, Contreras-Vidal JL (2023) Brain-to-brain communication during musical improvisation: a performance case study. F1000Res 11:989. https://doi.org/10.12688/f1000research.123515.2 pmid:37809054
  64. Rose MC, Styr B, Schmid TA, Elie JE, Yartsev MM (2021) Cortical representation of group social communication in bats. Science 374:eaba9584. https://doi.org/10.1126/science.aba9584 pmid:34672724
  65. Shibasaki H, Barrett G, Halliday E, Halliday AM (1980) Components of the movement-related cortical potential and their scalp topography. Electroencephalogr Clin Neurophysiol 49:213–226. https://doi.org/10.1016/0013-4694(80)90216-3
  66. Smulders FTY, Miller JO (2011) The lateralized readiness potential. In: The Oxford handbook of event-related potential components (Kappenman ES, Luck SJ, eds), pp 210–230. Oxford: Oxford University Press.
  67. Somervail R, Zhang F, Novembre G, Bufacchi RJ, Guo Y, Crepaldi M, Hu L, Iannetti GD (2021) Waves of change: brain sensitivity to differential, not absolute, stimulus intensity is conserved across humans and rats. Cereb Cortex 31:949–960. https://doi.org/10.1093/cercor/bhaa267 pmid:33026425
  68. Stangl M, Maoz SL, Suthana N (2023) Mobile cognition: imaging the human brain in the 'real world'. Nat Rev Neurosci 24:347–362. https://doi.org/10.1038/s41583-023-00692-y pmid:37046077
  69. Stringer C, Pachitariu M, Steinmetz N, Reddy CB, Carandini M, Harris KD (2019) Spontaneous behaviors drive multidimensional, brainwide activity. Science 364:eaav7893. https://doi.org/10.1126/science.aav7893 pmid:31000656
  70. Testard C, Tremblay S, Parodi F, DiTullio RW, Acevedo-Ithier A, Gardiner KL, Kording K, Platt ML (2024) Neural signatures of natural behaviour in socializing macaques. Nature 628:381–390. https://doi.org/10.1038/s41586-024-07178-6
  71. Theofanopoulou C, Paez S, Huber D, Todd E, Ramírez-Moreno MA, Khaleghian B, Sánchez AM, Barceló L, Gand V, Contreras-Vidal JL (2024) Mobile brain imaging in butoh dancers: from rehearsals to public performance. BMC Neurosci 25:62. https://doi.org/10.1186/s12868-024-00864-1 pmid:39506628
  72. Toiviainen P, Luck G, Thompson MR (2010) Embodied meter: hierarchical eigenmodes in music-induced movement. Music Percept 28:59–70. https://doi.org/10.1525/mp.2010.28.1.59
  73. Tremblay S, Testard C, DiTullio RW, Inchauspé J, Petrides M (2023) Neural cognitive signals during spontaneous movements in the macaque. Nat Neurosci 26:295–305. https://doi.org/10.1038/s41593-022-01220-4
  74. Troje NF (2002) Decomposing biological motion: a framework for analysis and synthesis of human gait patterns. J Vis 2:371–387. https://doi.org/10.1167/2.5.2
  75. Urigüen JA, Garcia-Zapirain B (2015) EEG artifact removal—state-of-the-art and guidelines. J Neural Eng 12:031001. https://doi.org/10.1088/1741-2560/12/3/031001
  76. Varlet M, Nozaradan S, Nijhuis P, Keller PE (2020) Neural tracking and integration of 'self' and 'other' in improvised interpersonal coordination. Neuroimage 206:116303. https://doi.org/10.1016/j.neuroimage.2019.116303
  77. Varlet M, Nozaradan S, Schmidt RC, Keller PE (2023) Neural tracking of visual periodic motion. Eur J Neurosci 57:1081–1097. https://doi.org/10.1111/ejn.15934
  78. Vercillo T, O'Neil S, Jiang F (2018) Action–effect contingency modulates the readiness potential. Neuroimage 183:273–279. https://doi.org/10.1016/j.neuroimage.2018.08.028 pmid:30114465
  79. Wass SV, Noreika V, Georgieva S, Clackson K, Brightman L, Nutbrown R, Covarrubias LS, Leong V (2018) Parental neural responsivity to infants' visual attention: how mature brains influence immature brains during social interaction. PLoS Biol 16:e2006328. https://doi.org/10.1371/journal.pbio.2006328 pmid:30543622
  80. Wass SV, Whitehorn M, Haresign IM, Phillips E, Leong V (2020) Interpersonal neural entrainment during early social interaction. Trends Cogn Sci 24:329–342. https://doi.org/10.1016/j.tics.2020.01.006
  81. Weineck K, Wen OX, Henry MJ (2022) Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience. Elife 11:e75515. https://doi.org/10.7554/eLife.75515 pmid:36094165
  82. Woods DL, Alho K, Algazi A (1992) Intermodal selective attention. I. Effects on event-related potentials to lateralized auditory and visual stimuli. Electroencephalogr Clin Neurophysiol 82:341–355. https://doi.org/10.1016/0013-4694(92)90004-2
  83. Yan Y, Goodman JM, Moore DD, Solla SA, Bensmaia SJ (2020) Unexpected complexity of everyday manual behaviors. Nat Commun 11:3564. https://doi.org/10.1038/s41467-020-17404-0 pmid:32678102
  84. Yang Y, et al. (2021) Wireless multilateral devices for optogenetic studies of individual and social behaviors. Nat Neurosci 24:1035–1045. https://doi.org/10.1038/s41593-021-00849-x pmid:33972800
  85. Zhang W, Yartsev MM (2019) Correlated neural activity across the brains of socially interacting bats. Cell 178:413–428.e22. https://doi.org/10.1016/j.cell.2019.05.023 pmid:31230710
Keywords

  • dance
  • electroencephalography (EEG)
  • full-body kinematics
  • multivariate modeling
  • real-world behavior
  • sensorimotor processing
  • social coordination
  • spontaneous movement
  • temporal response function (TRF)
