Abstract
A growing number of social interactions are taking place virtually on videoconferencing platforms. Here, we explore potential effects of virtual interactions on observed behavior, subjective experience, and neural “single-brain” and “interbrain” activity via functional near-infrared spectroscopy neuroimaging. We scanned a total of 36 human dyads (72 participants, 36 males, 36 females) who engaged in three naturalistic tasks (i.e., problem-solving, creative-innovation, socio-emotional task) in either an in-person or virtual (Zoom) condition. We also coded cooperative behavior from audio recordings. We observed reduced conversational turn-taking behavior during the virtual condition. Given that conversational turn-taking was associated with other metrics of positive social interaction (e.g., subjective cooperation and task performance), this measure may be an indicator of prosocial interaction. In addition, we observed altered patterns of averaged and dynamic interbrain coherence in virtual interactions. Interbrain coherence patterns that were characteristic of the virtual condition were associated with reduced conversational turn-taking. These insights can inform the design and engineering of the next generation of videoconferencing technology.
SIGNIFICANCE STATEMENT Videoconferencing has become an integral part of our lives. Whether this technology impacts behavior and neurobiology is not well understood. We explored potential effects of virtual interaction on social behavior, brain activity, and interbrain coupling. We found that virtual interactions were characterized by patterns of interbrain coupling that were negatively implicated in cooperation. Our findings are consistent with the perspective that videoconferencing technology adversely affects individuals and dyads during social interaction. As virtual interactions become even more necessary, improving the design of videoconferencing technology will be crucial for supporting effective communication.
- cooperation
- functional near-infrared spectroscopy (fNIRS)
- hyperscanning
- social neuroscience
- virtual interaction
Introduction
Videoconferencing technology is increasingly integral to business (Bloom et al., 2021), education (Lowenthal et al., 2020), and health care (Feijt et al., 2020). A growing number of empirical studies have considered the potential effects of this technology on well-being and on the quality and nature of social interaction across different contexts. For example, some recent research has found that transmission delays during videoconferencing disrupt the natural rhythm of a conversation, leading to delays in turn initiation (Boland et al., 2022). In addition, there is both anecdotal and empirical evidence that prolonged virtual interactions lead to feelings of exhaustion and irritation, a phenomenon commonly referred to as “Zoom fatigue” (Bailenson, 2021; BBC, 2021; Fauville et al., 2021; Nesher Shoshan and Wehrt, 2022). Zoom fatigue could be caused by increased cognitive effort related to heightened self-focused attention, the experience of looking at a grid of staring faces with increased eye contact, the inhibition of body movement, and the production and interpretation of unnatural nonverbal cues such as nodding in an exaggerated way (Bailenson, 2021). Neuroimaging-based measures of cognitive effort may advance our understanding of the mechanistic effects of videoconferencing technology on social interaction. Specifically, we propose that functional near-infrared spectroscopy (fNIRS) is a potentially informative tool to investigate neural and interbrain differences between virtual and in-person interactions (Balters et al., 2020).
fNIRS is an optical brain-imaging modality that measures changes in cortical oxygenation that occur when regions of the cortex become more active (Cui et al., 2010; Strangman et al., 2002). Importantly, increased cortical oxygenation, especially in prefrontal areas, has been used as a neural measure of cognitive load and effort (Argyle et al., 2021; Causse et al., 2017; Fishburn et al., 2014). Recent fNIRS research has extended the assessment of single-brain functioning to multibrain “hyperscanning” experiments to gain insight into neural activity across interacting individuals (Cui et al., 2012; Funane et al., 2011). Researchers have increasingly considered when and how neural processes become synchronized, and how interbrain coherence (IBC; correlation of cortical activity between brains) relates to behavioral measures (Babiloni and Astolfi, 2014; Balters et al., 2020; Czeszumski et al., 2020).
Shared attention and joint goal-directed behaviors may be important for the occurrence of IBC. Prior studies have found increased IBC when dyads completed a task together, in contrast to completing the identical task (simultaneously) alone (Liu et al., 2016; Hu et al., 2017; Fishburn et al., 2018; Feng et al., 2020; Zhang et al., 2021; Zhou et al., 2022b), and when cooperating, in contrast to when competing with one another (Cui et al., 2012; Lu et al., 2019). Increased IBC has also been observed in other prosocial contexts including in-group bonding (Yang et al., 2020), empathy (Bembich et al., 2022), and after gift exchanges (Balconi et al., 2019; Balconi and Fronda, 2020). The majority of hyperscanning studies have used task averaging to assess the mean levels of IBC at the region-specific level (Babiloni and Astolfi, 2014; Wang et al., 2018; Balters et al., 2020; Czeszumski et al., 2020). One limitation of “averaged” measures is that they do not consider changes in IBC that reflect the dynamic nature of social interaction. Thus, researchers have recently proposed a novel analytical approach that captures dynamic changes in IBC [“dynamic IBC” (dIBC); Li et al., 2021; Wang et al., 2022; Zhou et al., 2022a]. To date, studies have not considered the effects of virtual interaction on averaged IBC or dIBC.
In the current study, dyads performed a problem-solving task, a creative-innovation task, and a socio-emotional task. Across these tasks, we explored the following: (1) the potential differences between in-person and virtual interactions (Fig. 1) with respect to observed behavior, subjective experience, brain functioning, and (averaged and dynamic) IBC; and (2) potential associations among these measures.
Figure 1-1
ClearMask transparent surgical mask. We used this transparent, anti-fog, and Food and Drug Administration-approved face mask in both experimental conditions (in-person and virtual interaction).
Materials and Methods
The experimental methodology was approved by the Stanford Institutional Review Board (IRB Approval #18160) and followed COVID-19 regulations for human experimentation as defined by the Stanford University School of Medicine.
Experimental design
Participants
A total of 72 adults participated in the study (N = 36 female, N = 36 male; mean age, 27.11 years; SD, 7.57 years). All participants were healthy, right-handed, and had normal or corrected-to-normal hearing and vision. The study followed a between-subject design, and we randomly assigned dyads to either an in-person or virtual interaction condition. Participants were blind to the aim of the research study (i.e., assessing differences between in-person and virtual interactions). We matched dyads based on sex composition. Both conditions included six female–female, six male–male, and six female–male dyads. Participants in the same dyad were previously unacquainted. We recruited participants through local advertisement on flyers, email lists, and social media. We obtained written consent and compensated participants with an Amazon gift card [$25 (US) per hour]. The experimental procedure lasted up to 3 h.
Experimental procedure
In the in-person condition, dyad partners sat face to face on opposite sides of a square table. In compliance with COVID-19 regulations, participants kept a distance of 9 feet (∼2.74 m) and wore clear “anti-fog” face masks that were approved by the US Food and Drug Administration (Extended Data Fig. 1-1). In the virtual condition, dyad partners sat at desks in two separate and auditorily disconnected rooms and interacted over videoconferencing using Zoom (Zoom Video Communications). To eliminate the potential confound of wearing a mask in only one condition, participants in the virtual condition also wore clear face masks throughout the experiment. We used two identical laptops for the virtual interaction (Yoga 730–730-15IKB, Lenovo; screen size, 15.6 inches) and set Zoom settings to full screen without self-view windows. We positioned the laptops to approximately match the facial proportions of the in-person condition. We attached fNIRS caps while participants were in the same room or after the Zoom connection was already established; we instructed participants not to talk to one another during that time. Before starting the experiment, participants had 3 min to introduce themselves to one another. Participants were alone in the room and received task instructions via audio prompts. Dyads completed three experimental tasks. They filled out subjective experience questionnaires before the first task and subsequently after each task. We collected audio and video recordings of the participants, with four portable video cameras capturing front and side views of the participants. After the experiment, each participant completed additional assessments in separate rooms to assess personality traits, creative ability, and familiarity with virtual (Zoom) interactions.
Experimental tasks
We designed the experiment to explore potential impacts of virtual interactions on observed behavior, subjective experience, and neural mechanisms across diverse interaction tasks intended to simulate collaborative activities that are important in workplace environments. Considering diverse collaborative activities allows us to assess whether there are common or context-specific effects of virtual interaction on human (brain) behavior. Thus, we selected the following three different naturalistic collaboration tasks: (1) a problem-solving task; (2) a creative-innovation task; and (3) a socio-emotional task. Inclusion of the three tasks facilitated analysis of both general (i.e., across tasks) and task-specific effects of the virtual interaction condition on selected outcomes (i.e., observed behavior, subjective experience, and single-brain and interbrain activity). Dyads collaborated on each task for 8 min without interruptions. The task order was randomized across participants, and we separated tasks by a 2 min calming video of a beach to minimize carryover effects across tasks.
The problem-solving task resembled a naturalistic team discussion in industry that requires reasoning, rationalizing, and consensus decision-making. In this task, dyads were asked to solve problems related to traffic rules. One of the study inclusion criteria was therefore possession of a valid US driver's license. We asked participants to collaborate and identify the four most important traffic rules that enhance safety on US highways. To emphasize the importance of each rule, participants had to describe how a chosen rule enhances safety on US highways and why the rule was more important than other traffic rules. We instructed the dyads to write down their rules and safety justification after completion of the task.

We administered a creative-innovation task influenced by “design thinking” practices (Mayseless et al., 2019). The task resembled a creative brainstorming session in a research and development unit. We asked participants to collaborate and design a solution that could increase water conservation in the highest possible number of households in California. The solution could take any form (e.g., product, process, campaign). We instructed dyads to write down their solution after completion of the task.

For the socio-emotional task, we asked participants to engage in a modified version of a nonviolent communication (NVC) exercise that is used to increase interpersonal closeness between individuals (Rosenberg and Chopra, 2015). We provided participants with a NVC list of “Needs We All Have” (Rosenberg and Chopra, 2015; i.e., we selected the 10 needs within the “connectedness” section) and asked participants to collaborate and identify four needs from the list that were most meaningful to them. To emphasize the importance of each need, participants had to describe a situation from their life when that need was not met and how it made them feel. We instructed the partner to actively listen, to acknowledge the feelings of the one who shared and to describe why the need was also meaningful to them. We instructed dyads that they would have to describe their list of needs to an experimenter after completion of the task.
Behavioral assessments
We captured video and audio recordings of participants during the experiment and derived two behavioral assessments from the audio recordings: conversational turn-taking and observed performance.
Conversational turn-taking.
From the audio recordings, we counted the number of turn-taking events between dyad partners for each task.
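For illustration, a turn-taking event count can be computed as the number of speaker changes in a time-ordered sequence of utterance labels. The following MATLAB sketch assumes a manually annotated label vector (the variable speaker is hypothetical) and is not the exact annotation procedure used in the study.

```matlab
% Minimal sketch: count conversational turn-taking events from an annotated
% utterance sequence (hypothetical input, for illustration only).
% 'speaker' lists utterance-level speaker labels in temporal order,
% e.g., 1 = dyad partner A, 2 = dyad partner B.
speaker = [1 1 2 1 2 2 2 1];                 % example annotation for one task

% A turn-taking event is a change of speaker between consecutive utterances.
turnTakingEvents = sum(diff(speaker) ~= 0);  % = 4 for the example above

fprintf('Number of turn-taking events: %d\n', turnTakingEvents);
```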
Observed performance.
For the problem-solving task, two researchers (author S.B. and a trained research assistant) rated the level of safety obtained, individually for each of the four traffic rules, on a 5-point scale ranging from 1 (“very low”) to 5 (“very high”). We averaged the scores across the four rules to generate the final problem-solving performance score. The inter-rater reliability index was good [intraclass correlation coefficient (ICC) = 0.803]. We repeated this procedure for the creative-innovation task, for which two researchers rated the level of creative innovation as a performance rating. Both raters had expertise in assessing creative innovation according to the Stanford Design School principles (authors S.B. and G.H.). Specifically, we assessed the creative-innovation rating based on four subscales, including “fluency” (i.e., total number of design elements), “originality” (i.e., statistical rarity of the response across answers in this study), “elaboration” (i.e., level of imagination and exposition of detail), and “accountability” (i.e., effectiveness of water conservation), as derived from the study by Torrance (1974). We obtained the ratings on a 5-point scale ranging from 1 (very low) to 5 (very high) and averaged the scores from these four subscales to obtain the final creative-innovation performance score. The inter-rater reliability index was good (ICC = 0.855). For the socio-emotional task, two researchers (author S.B. and a trained research assistant) blindly rated the performance by listening to the conversations through recorded audio material. To blind the audio files, we first applied an audio effect filter to the recordings (i.e., https://voicechanger.io/; “old radio” filter; distortion = 55; effect = 1). We blindly rated performance based on (1) the level of vulnerability and openness and (2) the level of empathic response for each dyad partner from 1 (very low) to 5 (very high). We calculated the average value across all four scores as the final performance score. The inter-rater reliability index was good (ICC = 0.827).
Subjective experience assessments
Questionnaires captured the subjective experience of participants during the experiment, including affective responses, subjective cooperation index, subjective performance index, and interpersonal closeness index.
Affective responses.
Participants completed the Affect Grid Survey (Russell, 1980) to inquire about their level of arousal and level of valence on a 9-point Likert scale ranging from “sleepy” to “energized” and “unpleasant” to “pleasant,” respectively. Self-reported levels of stress were obtained via the Perceived Stress Scale (Cohen et al., 1997), using a 9-point Likert scale ranging from “low” to “high.”
Subjective cooperation index.
Participants rated the overall cooperation of the team on a 9-point Likert scale ranging from “not at all cooperative” to “extremely cooperative.”
Subjective performance index.
Participants also rated their subjective performance of the team for each task session on a 9-point Likert scale ranging from “not at all” to “extremely.”
Interpersonal closeness index.
Participants rated subjective sense of closeness toward their dyad partner (i.e., “interpersonal closeness”; Tarr et al., 2015) on five 7-point Likert subscales, including questions about connectedness and trust (Wiltermuth and Heath, 2009), an adapted version of the inclusion of other in self scale (Aron et al., 1992), likeability (Hove and Risen, 2009), and similarity in personality (Valdesolo and DeSteno, 2011). The final score was produced by averaging the subscores.
Postexperimental assessments
To assess the similarity of the two interaction conditions, we assessed participants' personality traits, creative ability, and prior Zoom familiarity after the experiment in two separate rooms.
Personality traits.
Participants completed the NEO Five-Factor Inventory-3 (FFI-3) survey (McCrae and Costa, 2007), the Adult Attachment Scale Survey (Collins and Read, 1990), and the Emotional Intelligence Survey by Wong and Law (2002) to capture personality traits. For the NEO-FFI-3 survey, we calculated T-scores for all five subscales, including Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. For the Adult Attachment Scale, we calculated the Avoidant and Anxious subscales. For the Emotional Intelligence score, we calculated the average of the four subscores: self-emotional appraisal, others' emotion appraisal, use of emotion, and regulation of emotion.
Creative ability.
We further applied the alternate uses task (AUT) to assess individual levels of divergent thinking and creativity (Guilford, 1967). We presented participants with pictures of four common objects (i.e., a pillow, a key, a toothbrush, and a ceramic plate); each picture also displayed the object's name and its most common everyday use. Participants had to list as many alternative uses as possible for each object for a duration of 8 min, while being as creative as possible. We presented participants with an example of alternate uses for a paper cup before the task. We only included responses that did not replicate the common uses of the objects. Scoring included “fluency” (number of responses) and “originality” (statistical rarity of the response across answers in this study) following procedures as described in the study by Torrance (1974), with final scores being the average score of all items.
Zoom videoconferencing familiarity.
Participants rated their prior experience and proficiency with Zoom videoconferencing on a 5-point Likert scale ranging from “not at all” to “extremely.”
fNIRS data acquisition and preprocessing
We recorded cortical hemodynamic activity of each participant using a continuous-wave fNIRS system (NIRSport2 System, NIRX) with two wavelengths (760 and 850 nm) and a sampling frequency of 10.2 Hz. We divided a total of 128 optodes (64 sources and 64 detectors) between the two participants, resulting in 100 measurement channels per participant that were placed over the entire cortex according to the international 10–20 EEG placement system (Fig. 2a). We further collected head motion data with a system-embedded gyroscope in the Cz position at 10.2 Hz [i.e., three-dimensional accelerometer data (in m/s²) and three-dimensional rotation data (°/s)]. The two systems were connected via a network, which allowed time synchronization between the spatially separated devices.
Figure 2-1
Overview of the ROI averaging procedure. We calculated IBC between each ROI of one partner and each ROI of the other partner on the converted HbO time series (a total of 1024 combinations: 32 ROIs × 32 ROIs). We then averaged the IBC between the same ROI pairings. For example, the IBC value of ROI 2 (participant 1) and ROI 3 (participant 2) was averaged with the IBC value of ROI 2 (participant 2) and ROI 3 (participant 1). This procedure resulted in 528 IBC pairings. The individual cells show all possible ROI pairings. Pairings involving the same ROI in both partners (e.g., ROI 1 of participant 1 and ROI 1 of participant 2) are marked in gray across the diagonal.
We analyzed the raw fNIRS data using the NIRS Brain AnalyzIR Toolbox (Santosa et al., 2018) in MATLAB version R2021a (MathWorks). We also assessed data quality via the scalp coupling index (SCI; Pollonini et al., 2016). Results showed that an average of 74.13% of the data had an SCI of >0.8. We excluded channels with excessive noise (i.e., SCI ≤ 0.8) from subsequent analyses and converted the remaining raw data to optical density data. We then applied a fourth-order Butterworth bandpass filter (0.02–0.2 Hz) to eliminate artifacts such as cardiac interference and corrected for motion artifacts using a wavelet motion correction procedure (Molavi and Dumont, 2012). We then transformed the data into concentration changes of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR) according to the Modified Beer–Lambert Law (Wyatt et al., 1986). Since HbO and HbR data are relative values, we converted the resulting data to z scores. We created 32 ROIs via source localization (Huppert et al., 2017; Tremblay et al., 2018) by averaging all channels that shared a common fNIRS source (Fig. 2b). To derive the center location of each ROI, we averaged across the Montreal Neurological Institute (MNI) coordinates of the channels of each ROI and projected all ROIs onto the cortical surface using an automatic anatomic labeling method (Lancaster et al., 2000; Singh et al., 2005; Extended Data Table 2-1). For the single-brain analysis, we calculated the average HbO and HbR values across the duration of each task. Because HbO measures are known to be more robust and sensitive to task-associated changes compared with HbR measures (Plichta et al., 2006; Ferrari and Quaresima, 2012), we used only HbO data for further analyses, as is common in fNIRS hyperscanning research (Balters et al., 2020; Yang et al., 2020; Zhou et al., 2022a,b).
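The band-pass filtering, z-scoring, and ROI-averaging steps can be sketched with standard MATLAB functions as follows. This is a simplified illustration rather than the AnalyzIR Toolbox pipeline itself: channel pruning, wavelet motion correction, and the Modified Beer–Lambert conversion are assumed to have been applied upstream, and the variable names (hbo, roiSourceIdx) are hypothetical.

```matlab
% Minimal sketch of the generic preprocessing steps. Note: in the study, the
% band-pass filter was applied to optical density data before the hemoglobin
% conversion; here, for brevity, it is shown applied directly to HbO.
fs = 10.2;                                % sampling frequency (Hz)
% hbo:          [nSamples x nChannels] HbO concentration changes, one participant
% roiSourceIdx: [1 x nChannels] index (1..32) of the fNIRS source shared by each
%               channel, used to group channels into ROIs (hypothetical variables)

% Fourth-order Butterworth band-pass filter (0.02-0.2 Hz), applied zero-phase
[b, a]  = butter(4, [0.02 0.2] / (fs / 2), 'bandpass');
hboFilt = filtfilt(b, a, hbo);

% Convert relative HbO values to z scores (per channel)
hboZ = zscore(hboFilt);

% Average all channels sharing a common source into 32 ROI time series
nRois  = 32;
hboRoi = nan(size(hboZ, 1), nRois);
for r = 1:nRois
    hboRoi(:, r) = mean(hboZ(:, roiSourceIdx == r), 2, 'omitnan');
end
```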
Table 2-1
Description of ROIs with Talairach Atlas labels.
Averaged interbrain coherence analysis
We used wavelet transform coherence (WTC; Cui et al., 2012) analysis to assess averaged IBC (i.e., the similarity between the fNIRS signals of dyad partners). Specifically, we used the wavelet coherence function (“wcoherence”) in MATLAB 2021a (MathWorks). We calculated wavelet coherence values between each ROI of one partner and each ROI of the other partner on the converted HbO time series data (a total of 1024 combinations: 32 ROIs × 32 ROIs). While earlier studies investigated IBC between the same ROIs only (i.e., region A–region A; Cui et al., 2012; Jiang et al., 2015), more recent studies assessed IBC across all possible ROI pair combinations. These latter studies show that social interactions not only induce IBC between the same ROIs but also across different ROIs (Lu et al., 2019; Mayseless et al., 2019; Li et al., 2021). Here, we used the latter approach and averaged the IBC values of symmetric ROI pairings, resulting in 528 unique ROI pairs (Extended Data Fig. 2-1). Thereafter, we converted the averaged coherence values to Fisher z-statistics. Following recent approaches in the field (Lu et al., 2020, 2021, 2022; Park et al., 2022; Zhou et al., 2022b), we identified frequency bands of interest that were specifically associated with condition (i.e., in-person vs virtual) and/or task, instead of averaging across an entire predefined frequency band. For each ROI pair and each task separately, we ran a series of independent t tests between the conditions (i.e., virtual vs in-person) across all frequency increments within the range 0.01–0.2 Hz (i.e., 53 frequencies). We did not consider data >0.2 Hz to avoid systemic noise related to heart rate (0.6–2 Hz) and respiration (0.2–0.3 Hz), and further excluded data <0.01 Hz to remove low-frequency fluctuations (Molavi and Dumont, 2012). We applied false discovery rate (FDR) correction (p < 0.05) to the resulting p values (i.e., 53 tests per ROI pair and task) to adjust for repeated testing within each ROI pair. For subsequent analyses, we used IBC values of those frequency increments that showed statistically significant differences between the conditions. If an ROI pair had several significant frequency increments, we averaged IBC values across these increments. A total of 50 ROI pairs exhibited significant frequency increments. We only conducted further analyses using ROI pairs with a minimum of 50% usable data across participants; this resulted in 20 eligible ROI pairs. More conservative thresholds (i.e., 60%, 70%, and 80%) resulted in a considerable reduction in eligible ROI pairs (i.e., 12, 11, and 5 ROI pairs, respectively). Given the exploratory nature of this study, we used a 50% usable data threshold to provide more extensive brain coverage than more conservative thresholds would have provided.
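A condensed MATLAB sketch of this computation for one ROI pair of one dyad is shown below. Variable names are illustrative; the full analysis additionally loops over all 32 × 32 ROI combinations, averages symmetric pairings into 528 pairs, and repeats the frequency-wise t tests for every ROI pair and task.

```matlab
% Minimal sketch: wavelet transform coherence for one ROI pair of one dyad,
% followed by a frequency-wise condition comparison (illustrative variables).
fs = 10.2;                                      % sampling frequency (Hz)
% roiP1, roiP2: [nSamples x 32] ROI-averaged HbO series of dyad partners 1 and 2
[wcoh, ~, f] = wcoherence(roiP1(:, 2), roiP2(:, 3), fs);   % e.g., ROI 2 vs ROI 3

band = f >= 0.01 & f <= 0.2;                    % frequency band of interest
ibcZ = atanh(mean(wcoh(band, :), 2));           % time-averaged coherence per
                                                % frequency, Fisher z-transformed

% Condition comparison per frequency increment (one ROI pair, one task).
% zVirtual, zInPerson: [nFreq x nDyads] Fisher-z IBC values for each condition
nFreq = size(zVirtual, 1);
p = nan(nFreq, 1);
for k = 1:nFreq
    [~, p(k)] = ttest2(zVirtual(k, :), zInPerson(k, :));   % independent t test
end
% p values are then FDR corrected across the nFreq tests, and IBC values at
% significant increments are averaged for subsequent analyses.
```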
Dynamic interbrain coherence state analysis
We adopted a dynamic functional connectivity approach (Li et al., 2021; Wang et al., 2022; Zhou et al., 2022a) to characterize the dIBC states during each task. We conducted the analysis independently for each task, using the data from the 20 eligible ROI pairs. Because dIBC states are derived through an averaging approach across all dyads, we included only ROI pairs that had at least 50% of the data to improve the quality of this analysis. A total of 16 ROI pairs were eligible for this analysis. The dIBC approach uses a sliding window to segment a WTC-computed IBC matrix across the entire task duration for each dyad. We set the window size to 30 s and shifted it in increments of 1 s along the task duration. We averaged all IBC matrices within each time window to obtain an averaged IBC matrix. We then segmented the 8 min measurement duration into a series of windowed IBC matrices (16 ROIs × 16 ROIs × 581 windows) for each dyad and used a k-means clustering approach to cluster the condition-concatenated, time-varying windowed IBC matrices into distinct dIBC states across the task period. In other words, a “dIBC state” represents a dyad-level subnetwork comprising multiple IBC pairs at a given time window. We determined the number of clusters using the elbow criterion of the cluster cost, computed as the ratio of the within-cluster distance to the between-cluster distance (Allen et al., 2014; Fang et al., 2020). Following a previously described approach (Li et al., 2021), we characterized the properties of dIBC states by the occurrence rate of each dIBC state, defined as the percentage of the task period during which a given dIBC state occurs (Allen et al., 2014).
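The core mechanics of this analysis (sliding-window averaging, clustering, and occurrence rates) can be sketched in MATLAB as follows. The variable names are hypothetical, and for brevity the sketch clusters the windows of a single dyad, whereas the study concatenated windows across all dyads and conditions before clustering.

```matlab
% Minimal sketch of the dIBC state analysis (illustrative variable names).
% dynIbc: [16 x 16 x nTimePoints] time-resolved IBC matrices for one dyad;
% fs is the sampling rate of the time-resolved coherence estimate.
winLen = round(30 * fs);                             % 30 s sliding window
step   = round(1 * fs);                              % shifted in 1 s increments
starts = 1:step:(size(dynIbc, 3) - winLen + 1);
nWin   = numel(starts);

% Average the IBC matrices within each window and vectorize the unique pairings
mask   = triu(true(16));                             % upper triangle incl. diagonal
winIbc = nan(nWin, nnz(mask));                       % windows x unique ROI pairings
for w = 1:nWin
    m = mean(dynIbc(:, :, starts(w):starts(w) + winLen - 1), 3);
    winIbc(w, :) = m(mask)';
end

% Cluster windowed IBC patterns into dIBC states; in the study, windows were
% concatenated across all dyads first, and the number of states was chosen
% with the elbow criterion on the cluster cost.
k = 4;                                               % e.g., four states
stateIdx = kmeans(winIbc, k, 'Replicates', 50);

% Occurrence rate of each state = fraction of windows assigned to that state
occurrence = histcounts(stateIdx, 1:k + 1) / nWin;
```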
Statistical analyses
Individual difference variables
We used IBM SPSS Statistics version 26 for all statistical analyses. We ran a series of independent t tests to assess whether there were differences in age, interdyad age differences, personality traits (i.e., NEO-FFI-3 T scores, adult attachment style, emotional intelligence), creative ability (i.e., AUT fluency, AUT originality), and familiarity with Zoom videoconferencing (i.e., experience and proficiency) between the two conditions (i.e., virtual vs in-person interaction).
Virtual interaction effects on observed behavior and subjective experience
As with the single-brain analyses described below, a second overarching aim was to determine whether there were general effects (i.e., across all tasks) or task-specific effects of condition on observed behaviors (i.e., conversational turn-taking and observed performance) and subjective experience measures (i.e., arousal, valence, stress, subjective cooperation, subjective performance, and interpersonal closeness). For each of these eight measures, we therefore conducted a two-way ANOVA to determine the effect of condition (i.e., virtual/in-person interaction) and task (i.e., problem-solving/creative-innovation/socio-emotional task).
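For a single outcome (e.g., conversational turn-taking counts), an equivalent model can be specified in MATLAB with anovan, as sketched below. The study ran these analyses in SPSS, so this is only an illustrative translation with hypothetical variable names, and the simple specification shown here ignores the repeated-measures structure over tasks.

```matlab
% Illustrative two-way ANOVA for one outcome measure (hypothetical variables).
% y:         [nObservations x 1] outcome values (one row per participant x task)
% condition: cell array of 'virtual' / 'in-person' labels (between subjects)
% task:      cell array of 'problem-solving' / 'creative' / 'socio-emotional'
[p, tbl, stats] = anovan(y, {condition, task}, ...
    'model', 'interaction', 'varnames', {'condition', 'task'});
% p(1): main effect of condition; p(2): main effect of task; p(3): interaction
```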
fNIRS data quality
We first inspected the gyroscope data to assess whether participants in the two conditions (i.e., in-person vs virtual interaction) exhibited different head motions that could contaminate the fNIRS data analyses. For each individual, we calculated the average acceleration across the x-, y-, and z-directions and the average rotation across the x-, y-, and z-axes for each task. We then averaged the resulting values across the three experimental tasks (i.e., problem-solving, creative-innovation, and socio-emotional) and executed independent t tests for acceleration and rotation data between conditions. We also assessed data quality via the SCI (Pollonini et al., 2016). On average, 74.13% of the data had an SCI >0.8 (in-person group: mean = 74.97%; SD = 18.07%; virtual group: mean = 73.28%; SD = 18.40%). We executed an independent t test to assess whether the SCI values differed between the two conditions.
Single-brain analyses
One overall goal of the study was to assess whether there were general effects (i.e., across all tasks) or task-specific effects of condition on single-brain activity (i.e., averaged HbO response over a given task). Instead of treating each of the 32 ROIs independently, we clustered the 32 ROIs into the six functional areas of the human cortex (Yeo et al., 2011), including the prefrontal area (10 ROIs), premotor/supplementary motor area (4 ROIs), primary motor area (2 ROIs), temporal area (4 ROIs), parietal area (6 ROIs), and occipital area (6 ROIs). For each of the functional areas, we conducted two-factor, mixed-design multivariate ANOVAs (MANOVAs) in which condition (i.e., virtual vs in-person) was the between-subjects factor and task was the within-subject factor (i.e., all subjects performed all three tasks). Each MANOVA assessed the effect of condition, task, and the condition-by-task interaction on overall activation (i.e., cortical activation across the ROIs of the same functional area).
Averaged interbrain coherence analysis
Our third overarching aim was to assess whether there were general effects (i.e., across all tasks) or task-specific effects of condition on averaged IBC. Similar to the single-brain analyses, we clustered the 20 ROI pairs into functional area groups. Each group comprised ROI pairs from the same functional areas, including six prefrontal–prefrontal ROI pairs (group 1), four prefrontal–premotor/primary motor ROI pairs (group 2), five prefrontal–temporal/parietal ROI pairs (group 3), and five temporal–primary motor/parietal ROI pairs (group 4). For each of the four functional area groups, we conducted two-factor mixed-design MANOVAs with condition as the between-subjects factor and task as the within-subject factor to examine the effect of condition and/or task on the combined dependent variables (i.e., IBC across the ROI pairs of the same group).
dIBC states analysis
Last, we assessed the potential impact of condition on dIBC states. The k-means cluster analyses identified four dIBC states for each task along with their corresponding occurrence rates across time. We assessed whether the occurrence rates of dIBC states differed between the conditions. For each task, we conducted a series of independent t tests to determine whether there were different occurrence rates between the conditions across brain states. We applied FDR correction to the resulting p values to account for multiple comparisons (i.e., four tests per task).
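The Benjamini–Hochberg FDR procedure used throughout these analyses can be sketched as follows; the example p values are hypothetical and serve only to illustrate the adjustment for a small family of tests (e.g., the four occurrence-rate tests per task).

```matlab
% Minimal Benjamini-Hochberg FDR sketch for a small family of tests
% (hypothetical uncorrected p values, e.g., four occurrence-rate t tests).
pRaw = [0.002 0.041 0.650 0.008];            % example uncorrected p values
m    = numel(pRaw);

[pSorted, order] = sort(pRaw);               % ascending p values
pAdj = pSorted .* m ./ (1:m);                % BH step-up adjustment
pAdj = min(cummin(pAdj(end:-1:1)), 1);       % enforce monotonicity, cap at 1
pAdj = pAdj(end:-1:1);

pCorrected        = zeros(1, m);
pCorrected(order) = pAdj;                    % back to the original test order
significant       = pCorrected < 0.05;       % FDR-corrected significance
```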
Data availability
Data available on request due to privacy/ethical restrictions.
Results
Conditions were matched on individual difference variables
As shown in Table 1, the groups participating in the in-person and virtual conditions were not significantly different from one another for any of the measured variables. Participants in the two conditions were matched on age, interdyad age differences, personality traits (i.e., NEO-FFI-3 T scores, adult attachment style, emotional intelligence), creative ability (i.e., AUT fluency, AUT originality), and familiarity with Zoom videoconferencing (i.e., experience and proficiency).
Virtual interaction effects on observed behavior (reduction in conversational turn-taking)
We identified a main effect of condition on conversational turn-taking (F(1,70) = 39.790, p < 0.001, partial η2 = 0.362; Fig. 3a). Regardless of the task, we observed less turn-taking (mean difference = –37.574; 95% CI, –49.454 to –25.694) in the virtual condition compared with the in-person condition. In contrast, there were no significant interaction effects or main effects of condition for observed performance or any of the subjective experience measures (i.e., arousal, valence, stress, subjective cooperation, subjective performance, and interpersonal closeness). We also identified main effects of task on conversational turn-taking and interpersonal closeness, which we present in Extended Data Figure 3-1.
Figure 3-1
a, b, Main effects of task on conversational turn-taking (a) and interpersonal closeness (b). Here, we present the main effects of task on observed behavior (i.e., conversational turn-taking and observed performance) and subjective experience metrics (i.e., arousal, valence, stress, subjective cooperation, subjective performance, and interpersonal closeness). A two-way repeated-measures ANOVA showed a main effect of task on conversational turn-taking (F(2,69) = 25.506, p < 0.001, partial η2 = 0.452; a) and interpersonal closeness (F(2,69) = 13.037, p < 0.001, partial η2 = 0.274; b). Post hoc analyses with FDR correction showed that the number of conversational turn-taking events was higher in the problem-solving task compared with the creative-innovation task (mean difference = 6.583; 95% CI, 1.248–11.919; p = 0.016; Cohen's d = 0.198) and the socioemotional task (mean difference = 19.583, 95% CI, 14.061–25.105, p < 0.001, Cohen's d = 0.588). In addition, conversational turn-taking was higher in the creative-innovation task compared with the socioemotional task (mean difference = 13.000, 95% CI, 7.611–18.389, p < 0.001, Cohen's d = 0.198). Post hoc analysis with FDR correction also showed that interpersonal closeness was higher in the socioemotional task compared with the problem-solving task (mean difference = 0.236, 95% CI, 0.122–0.350, p < 0.001, Cohen's d = 0.262) and the creative-innovation task (mean difference = 0.347, 95% CI, 0.201–0.493, p < 0.001, Cohen's d = 0.376). There was no difference in interpersonal closeness between the problem-solving and creative-innovation tasks (p = 0.195). No other observed behavior or subjective experience metrics showed a main effect of task. Statistically significant differences at FDR-corrected p < 0.05 are indicated by the asterisk symbol (*).
We used secondary, exploratory analyses to obtain information about the cooperative nature of variations in turn-taking. Specifically, we assessed correlations between conversational turn-taking and all other observed behavior and subjective experience measures across all three tasks. We corrected the resulting p values for multiple analyses (i.e., seven correlation analyses per measure) using FDR correction (p < 0.05). Subsequent correlation analyses revealed that turn-taking was positively correlated with subjective cooperation (r = 0.201, p = 0.021), subjective performance (r = 0.200, p = 0.011), and observed performance (r = 0.192, p = 0.010). Scatterplots of these associations are presented in Figure 3b. Subsequently, we ran additional univariate GLMs to assess whether the associations of conversational turn-taking with subjective cooperation and with subjective and observed performance were primarily driven by the virtual or the in-person condition. The association between conversational turn-taking and observed performance was significantly moderated by condition (F(1,153) = 6.639, p = 0.011) such that these variables were positively correlated in the virtual condition (r = 0.374, p = 0.001) but were not significantly related in the in-person condition (p = 0.940). The other association tests were not significant (p > 0.248).
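The moderation tests can be illustrated with a linear model that includes a turn-taking × condition interaction term. The MATLAB sketch below uses hypothetical variable names; the study ran these univariate GLMs in SPSS.

```matlab
% Illustrative moderation test: does the association between turn-taking and
% observed performance differ by condition? (hypothetical variables; one row
% per dyad x task in the vectors turnTaking, observedPerf, and condition)
tbl = table(turnTaking, observedPerf, categorical(condition), ...
    'VariableNames', {'turnTaking', 'observedPerf', 'condition'});
mdl = fitlm(tbl, 'observedPerf ~ turnTaking * condition');
disp(mdl.Coefficients)   % the turnTaking:condition term tests the moderation
```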
In-person and virtual conditions did not differ in terms of fNIRS data quality
Independent t tests showed no significant differences between the two conditions for acceleration (p = 0.613) or rotation data (p = 0.409; Table 2). There were no significant differences in SCI scores between the two groups, as assessed with an independent t test (p = 0.695). Thus, the in-person and virtual interaction conditions did not differ in terms of data quality.
Cortical activation did not differ between the conditions (single-brain analyses)
We did not observe significant main effects of condition or task, or an interaction effect of condition by task for any of the functional areas.
Virtual interaction affects averaged IBC
We observed a statistically significant multivariate interaction between condition and task on the combined ROI pairs for group 1 (F(12,15) = 2.781, p = 0.032, Wilks' Λ = 0.310, partial η2 = 0.690). There were no main or interaction effects of condition and task for the three other functional area groups.
Where the MANOVA identified a significant multivariate effect, we inspected follow-up two-way ANOVAs to understand the nature of the interaction between condition and task, correcting the resulting p values for multiple testing across the ROI pairs of the same group (i.e., functional area) using FDR correction (p < 0.05). Follow-up analyses for functional area group 1 showed a univariate interaction effect between condition and task for the left dorsal frontopolar area (dFPA)–left dFPA ROI pair (F(2,52) = 7.553, p = 0.001, partial η2 = 0.225), the right dFPA–left dFPA ROI pair (F(2,52) = 6.815, p = 0.002, partial η2 = 0.208), and the right dFPA–right dFPA ROI pair (F(2,52) = 7.273, p = 0.002, partial η2 = 0.219). We found higher averaged IBC in the virtual condition compared with the in-person condition during the problem-solving task for the left dFPA–left dFPA ROI pair (mean difference = 0.073; 95% CI, 0.027–0.118; p = 0.009; Cohen's d = 1.330) and during the creative-innovation task for the right dFPA–left dFPA ROI pair (mean difference = 0.063; 95% CI, 0.027–0.099; p = 0.003; Cohen's d = 1.350; Fig. 4a). In contrast, we identified lower averaged IBC in the virtual condition compared with the in-person condition during the socio-emotional task for the right dFPA–right dFPA ROI pair (mean difference = −0.084; 95% CI, −0.123 to −0.045; p < 0.001; Cohen's d = 1.178).
For the three ROI pairs that showed significant univariate effects, we conducted separate correlation analyses for each task. Specifically, we correlated the IBC value for each ROI pair with the observed behavior and subjective experience measures and FDR corrected the resulting p values for multiple analyses (i.e., three correlation analyses per observed behavior/subjective experience measure). Correlation analyses revealed a statistically significant association between averaged IBC in the right dFPA–right dFPA ROI pair and turn-taking (r = 0.450, p = 0.021; Fig. 4b). The association between turn-taking and averaged IBC in the left dFPA–left dFPA ROI pair (r = −0.330, uncorrected p = 0.067) did not pass the statistical significance threshold. We ran additional GLM analyses to assess whether the association between averaged IBC in the right dFPA–right dFPA ROI pair and turn-taking differed by condition. The two-way interaction effects involving condition were not statistically significant (p = 0.216). Thus, we did not find evidence that the associations between turn-taking and IBC differed in the virtual versus in-person condition.
Virtual interaction affects dIBC states
We observed differences by condition for the problem-solving task (Fig. 5), but not for the creative-innovation and socioemotional tasks (Extended Data Fig. 5-1). Specifically, the virtual condition was characterized by higher rates of dIBC state 1 (t(70) = −7.796, p < 0.001, mean difference = −0.275, 95% CI, −0.346 to −0.205), and lower rates of dIBC state 2 (t(60) = 5.037, p < 0.001, mean difference = 0.086, 95% CI, 0.052–0.120) and dIBC state 3 (t(70) = 4.309, p < 0.001, mean difference = 0.033, 95% CI, 0.077–0.211).
Figure 5-1
Overview of the dIBC state results for the creative-innovation and socioemotional tasks. a, b, For each task, we identified four distinct dIBC states that dyads exhibited and transitioned between during the social interaction. Each dIBC state is characterized by a heatmap of IBC values across a 16 × 16 ROI pair matrix. Because dIBC states were derived through an averaging approach across all dyads, dIBC states are interbrain states that dyads from both conditions have in common. The dIBC state analyses assessed whether there were differences in the occurrence rate of dIBC states. The 16 × 16 ROI pair matrix included the following ROI positions across the x-axes and y-axes: 1, anterior frontopolar area; 2, left lateral frontopolar area; 3, left dorsal frontopolar area; 4, right dorsal frontopolar area; 5, right lateral frontopolar area; 6, right dorsolateral prefrontal cortex; 7, right posterior inferior frontal gyrus; 8, right supplementary motor area; 9, right pre-motor cortex; 10, right primary motor cortex; 11, right middle temporal gyrus; 12, right supramarginal gyrus; 13, right somatosensory association cortex; 14, left middle temporal gyrus; 15, left primary motor cortex; 16, left somatosensory association cortex. c, d, Occurrence rates for each dIBC state are shown across the 581 windows derived from sliding window time segmentation. r, correlation coefficient; Na, not applicable. Statistically significant differences at FDR-corrected p < 0.05 are indicated by the asterisk symbol (*).
For the three dIBC states that showed significant differences, we ran correlation analyses with all observed behavior and subjective experience measures for the problem-solving task only, applying FDR correction to the resulting p values for multiple analyses (i.e., three correlation analyses per observed behavior/subjective experience measure). Results revealed significant associations between occurrence rate and turn-taking for dIBC state 1 (r = −0.521, p < 0.001), dIBC state 2 (r = 0.625, p < 0.001), and dIBC state 3 (r = 0.365, p = 0.003) in the problem-solving task. Last, we ran three univariate GLMs to assess whether the associations between conversational turn-taking and the occurrence rates of dIBC states 1, 2, and 3 differed by condition. We did not observe statistically significant interaction effects involving condition (p > 0.325).
Discussion
This study explored the effects of virtual interactions on behavior, subjective experience, and neural “single-brain” activity and “interbrain” coherence. We measured cortical activation in dyads during social interactions that took place either in-person or virtually using videoconferencing technology (Zoom). All dyads performed a problem-solving task, a creative-innovation task, and a socio-emotional task. We coded cooperative behaviors from audio recordings (blind to condition) and assessed subjective experience using questionnaire measures. Virtual interactions were characterized by reduced conversational turn-taking and altered patterns of averaged IBC and dIBC. Further, patterns of IBC that were more prevalent in the virtual condition (i.e., dIBC state 1) were negatively associated with conversational turn-taking. These findings suggest that virtual interactions may promote patterns of IBC that constrain conversational behavior related to positive social interaction.
We found that conversational turn-taking was reduced in the virtual condition across all tasks. It is worth noting that patterns of interbrain coherence involving prefrontal regions that were more pronounced in the virtual condition were linked to reduced turn-taking. Given that lower levels of turn-taking were associated with less positive subjective and behavioral aspects of social interaction (i.e., lower subjective cooperation and lower subjective and observed performance), decreased turn-taking likely reflects an adverse effect of virtual interaction. Our findings align with previous research demonstrating adverse effects of virtual interactions on conversational behavior, such as (millisecond) delays in conversational turn initiation (Boland et al., 2022). We did not, however, observe differences in performance and subjective experience measures between the conditions for any of the three tasks. These results are similar to those of prior studies that found no differences in perceived effectiveness and performance in association with virtual interaction (Brinsley et al., 2021; Camilleri and Camilleri, 2022; Chapman et al., 2021; Hillmer et al., 2021; Nguyen et al., 2022). Thus, our finding that videoconferencing affects social interaction at the behavioral level, but not at the subjective or performance level, may help to clarify some of the inconsistent findings in the literature. Given that two studies (Boland et al., 2022; this study) identified differences in speech behavior, future research that includes advanced speech analyses might be an important next step.
We found that virtual interaction affected averaged IBC. The socio-emotional task elicited higher averaged IBC in the in-person condition in the right dFPA–right dFPA ROI pair, which was positively associated with the number of conversational turns taken. Given that conversational turn-taking was linked to other metrics of positive interaction, our findings suggest that discussing socio-emotional content in-person elicits averaged IBC that may be more conducive to prosocial interaction. These findings are consistent with prior studies that have found increased IBC, particularly in right prefrontal ROI pairs, to be positively associated with prosocial behavior (Cui et al., 2012; Liu et al., 2016; Pan et al., 2017; Dai et al., 2018; Xue et al., 2018; Lu et al., 2019; Miller et al., 2019). In contrast, the problem-solving and creative-innovation tasks elicited higher averaged IBC in the virtual condition in the left dFPA–left dFPA and left dFPA–right dFPA ROI pairs, and these patterns were associated with reduced conversational turn-taking. Increased averaged IBC in the left dFPA has been identified during cooperative tasks that require increased dyadic coordination (Cheng et al., 2019; Hu et al., 2017). Thus, one interpretation of our findings is that virtual interaction may negatively impact interaction in contexts that are characterized by higher levels of conversational turn-taking.
We also found evidence that virtual interaction affected the dynamic nature of interbrain coherence. Specifically, the virtual and in-person conditions were characterized by higher occurrence rates (i.e., greater frequency) of different kinds of dIBC states in the problem-solving task. dIBC state 1 was more frequent in the virtual condition, whereas dIBC states 2 and 3 were more frequent in the in-person condition. Dyads who spent more time engaged in dIBC state 1 demonstrated less conversational turn-taking (i.e., less behavioral cooperation), whereas dyads who spent more time engaged in dIBC states 2 and 3 demonstrated more conversational turn-taking. Notably, dIBC state 2 was characterized by increased IBC in the right FPA–right FPA ROI pair (Fig. 5a, gray circle). These findings, combined with the averaged IBC findings, point to the possibility that IBC in the right FPA plays an important role in prosocial interaction. In contrast, dIBC state 1 was characterized by lower IBC across all ROI pairs compared with dIBC states 2 and 3. Interestingly, prior work suggests that noncooperative tasks elicit decreased IBC across brain regions compared with cooperative tasks (Miller et al., 2019). Thus, one interpretation of the current findings is that virtual interactions constrain dyadic coherence across brain regions, and that this has negative implications for cooperation. It is worth highlighting that we found differences between conditions using the dIBC measure for the problem-solving task only. The problem-solving task was characterized by greater conversational turn-taking than the other two tasks (Extended Data Fig. 3-1). dIBC may therefore provide a meaningful lens when considering tasks that elicit more back-and-forth discussion between partners. Future studies should consider multiple task contexts and multiple IBC analytical approaches to further test this hypothesis and to consider more fine-grained temporal analyses. dIBC approaches are still relatively novel, and it is difficult to effectively describe the distinct functional connections that drive or distinguish different interbrain states. It is our hope that further use of dIBC analyses will allow researchers to distill and establish distinct functional interbrain connections with a level of detail similar to that of single-brain functional networks (e.g., the salience network).
We note six limitations of this study. First, because of regulations related to COVID-19, participants in the in-person condition wore face masks during the experiment. Thus, participants in the virtual condition also wore face masks to avoid confounding the effect of videoconferencing technology versus in-person interaction. Although we used clear, antifog face masks, they still might have obstructed some facial features and impacted our results. Future research should aim to replicate our findings without using face masks. Second, we positioned the laptops to approximately match the facial proportions of the in-person condition. Research suggests that a stooped head position and related muscle fatigue in the neck and shoulder area could alter levels of oxygenated/deoxygenated hemoglobin (Lin et al., 2020; Oka and Asgher, 2021). Future research should investigate the effect of head position on neural and interbrain activity, especially when testing for Zoom fatigue in longer experiments. Third, we used a between-subject design with respect to condition. Exposing the same participants to both virtual and in-person conditions, and examining within-person differences across conditions, would be a strong replication and extension of our findings. Fourth, given our sample size, the current findings should be considered preliminary. As a related point, a larger sample size would have allowed testing of additional differences between conditions for some of the measures (e.g., observed performance and subjective experience measures) as well as effects related to sex and race/ethnicity. Future research should systematically investigate the effect of the sex and race/ethnicity composition of a dyad on behavior (i.e., conversational turn-taking; Coates, 2015; Martin and Craig, 1983), cortical activation (Leon-Carrion et al., 2006; Gaillard et al., 2021), and interbrain coherence (Baker et al., 2016; Kruse et al., 2021), in both in-person and virtual interactions. Fifth, we explored the effects of virtual interactions on neural activation and interbrain coherence across the (fNIRS-detectable) cortex. Achieving a high signal-to-noise ratio in all measurement channels across a high-density/whole-brain configuration is difficult. Although the finding that 74.13% of the data had an SCI >0.8 suggested an overall good signal-to-noise ratio, occipital channels had an overall poorer signal-to-noise ratio and were thus excluded from our interbrain coherence analyses. Future studies should explore differences between virtual and in-person interactions in ROI pairs that include occipital regions if possible. Using short-channel technology will be an effective method for securing state-of-the-art signal-to-noise quality (Santosa et al., 2020; Zhou et al., 2020). Last, we did not have a resting task to compare with our joint-cooperative tasks. There are growing concerns about whether some IBC findings are attributable to task-specific factors (e.g., neural entrainment of two individuals performing the same task; Holroyd, 2022). We used multiple tasks that we believe had similar demands and elicited similar levels of engagement. Nonetheless, it will be important for future research to include assessments at rest, and to improve task design and analyses to increase confidence that increased IBC is not a function of chance.
In conclusion, the current study provides evidence that virtual interaction was associated with decreased conversational turn-taking. Given that conversational turn-taking was associated with other metrics of positive social interaction (e.g., subjective cooperation and task performance), this measure may be an indicator of prosocial interaction. Both averaged and dynamic interbrain coherence in predominantly prefrontal areas were associated with conversational turn-taking. Our findings thus link the adverse behavioral effects of videoconferencing to altered interbrain coherence in prefrontal areas. Our study serves as a call to action for more research on the effects of videoconferencing on social interactions. This research may advance our understanding and improve our use of a technology that is increasingly integral to daily living.
Footnotes
This work was supported by a Hasso Plattner Design Thinking Research Program grant and a gift from the Kelvin Foundation.
The authors declare no competing financial interests.
- Correspondence should be addressed to Stephanie Balters at balters@stanford.edu