PT - JOURNAL ARTICLE
AU - Maansi Desai
AU - Jade Holder
AU - Cassandra Villarreal
AU - Nat Clark
AU - Brittany Hoang
AU - Liberty S. Hamilton
TI - Generalizable EEG encoding models with naturalistic audiovisual stimuli
AID - 10.1523/JNEUROSCI.2891-20.2021
DP - 2021 Sep 09
TA - The Journal of Neuroscience
PG - JN-RM-2891-20
4099 - http://www.jneurosci.org/content/early/2021/09/08/JNEUROSCI.2891-20.2021.short
4100 - http://www.jneurosci.org/content/early/2021/09/08/JNEUROSCI.2891-20.2021.full
AB - In natural conversations, listeners must attend to what others are saying while ignoring extraneous background sounds. Recent studies have used encoding models to predict electroencephalography (EEG) responses to speech in noise-free listening situations, sometimes referred to as “speech tracking”. Researchers have analyzed how speech tracking changes with different types of background noise. It is unclear, however, whether neural responses from acoustically rich, naturalistic environments with and without background noise can be generalized to more controlled stimuli. If encoding models for acoustically rich, naturalistic stimuli are generalizable to other tasks, this could aid in data collection from populations who may not tolerate listening to more controlled and less engaging stimuli for long periods of time. We recorded non-invasive scalp EEG while 17 human (8 male/9 female) participants listened to speech without background noise and to audiovisual speech stimuli containing overlapping speakers and background sounds. We fit multivariate temporal receptive field (mTRF) encoding models to predict EEG responses to pitch, the acoustic envelope, phonological features, and visual cues in both stimulus conditions. Our results suggested that neural responses to naturalistic stimuli were generalizable to more controlled data sets. EEG responses to speech in isolation were predicted accurately using phonological features alone, while responses to speech in a rich acoustic background were predicted more accurately when both phonological and acoustic features were included. Our findings suggest that naturalistic audiovisual stimuli can be used to measure receptive fields that are comparable and generalizable to more controlled audio-only stimuli. Significance Statement: Understanding spoken language in natural environments requires listeners to parse acoustic and linguistic information in the presence of other distracting stimuli. However, most studies of auditory processing rely on highly controlled stimuli with no background noise, or with background noise inserted at specific times. Here, we compare models where EEG data are predicted based on a combination of acoustic, phonetic, and visual features in highly disparate stimuli – sentences from a speech corpus and speech embedded within movie trailers. We show that modeling neural responses to highly noisy, audiovisual movies can uncover tuning for acoustic and phonetic information that generalizes to simpler stimuli typically used in sensory neuroscience experiments.
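
The abstract's central method, the multivariate temporal receptive field (mTRF), is a linear encoding model that maps time-lagged stimulus features onto the EEG signal at each electrode. A minimal Python sketch of that general technique follows, using synthetic data and a plain ridge-regression solver; the sampling rate, lag window, feature count, and regularization strength are illustrative placeholders, not the paper's actual pipeline.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Synthetic stand-ins (illustrative only; not the paper's data).
    fs = 128                    # EEG sampling rate in Hz (assumed)
    n_samples = fs * 60         # one minute of data
    n_features = 15             # e.g., envelope + phonological feature channels
    n_channels = 64             # EEG electrodes
    rng = np.random.default_rng(0)
    stim = rng.standard_normal((n_samples, n_features))  # stimulus features
    eeg = rng.standard_normal((n_samples, n_channels))   # EEG recordings

    # Build a time-lagged design matrix: each row holds the stimulus features
    # over a window of past lags (here 0 to ~400 ms), as in TRF analyses.
    n_lags = int(0.4 * fs)
    X = np.zeros((n_samples, n_features * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_features:(lag + 1) * n_features] = stim[:n_samples - lag]

    # Ridge regression maps lagged stimulus features to each EEG channel; the
    # learned weights, reshaped to (lags, features), are the receptive fields.
    model = Ridge(alpha=1e3)    # alpha is a placeholder; typically cross-validated
    model.fit(X, eeg)
    trf = model.coef_.reshape(n_channels, n_lags, n_features)

    # Model quality is usually reported as the correlation between predicted
    # and observed EEG per channel (computed in-sample here for brevity).
    pred = model.predict(X)
    r = [np.corrcoef(pred[:, ch], eeg[:, ch])[0, 1] for ch in range(n_channels)]
    print(f"mean prediction r across channels: {np.mean(r):.3f}")

In practice the regularization strength is chosen by cross-validation and prediction accuracy is evaluated on held-out data; the in-sample correlation above is only to keep the sketch short.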