Perceptual separation of concurrent speech sounds: absence of across-frequency grouping by common interaural delay

J Acoust Soc Am. 1995 Aug;98(2 Pt 1):785-97. doi: 10.1121/1.413571.

Abstract

Three experiments and a computational model explored the role of within-channel and across-channel processes in the perceptual separation of competing, complex, broadband sounds which differed in their interaural phase spectra. In each experiment, two competing vowels, whose first and second formants were represented by two discrete bands of noise, were presented concurrently for identification. Experiments 1 and 2 showed that listeners were able to identify the vowels accurately when each was presented to a different ear, but were unable to identify the vowels when they were presented with different interaural time delays (ITDs); i.e., listeners could not group the noisebands in different frequency regions with the same ITD and thereby separate them from bands in other frequency regions with a different ITD. Experiment 3 demonstrated that while listeners were unable to exploit a difference in interaural delay between the pairs of noisebands, they could identify a vowel defined by interaurally decorrelated noisebands when the other two noisebands were interaurally correlated. A computational model based upon that of Durlach [J. Acoust. Soc. Am. 32, 1075-1076 (1960)] showed that the results of these and other experiments can be interpreted in terms of a within-channel mechanism which is sensitive to interaural decorrelation. Thus the across-frequency integration which occurs in the lateralization of complex sounds may play little role in segregating concurrent sounds.
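The within-channel cue the abstract appeals to can be made concrete: a noise band carrying only an ITD remains perfectly correlated between the ears once the delay is compensated, whereas an interaurally decorrelated band stays poorly correlated at every lag. The sketch below illustrates this distinction; it is a minimal toy computation, not the paper's actual model, and all names, signal lengths, and lag ranges here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000          # assumed sample rate (Hz)
n = 4096            # assumed signal length; stands in for one narrow noise band
band = rng.standard_normal(n)

def max_interaural_corr(left, right, max_lag=40):
    """Peak normalized cross-correlation over a range of interaural lags.

    This approximates a simple within-channel binaural correlator:
    high peak -> ears carry (delayed) copies of the same waveform;
    low peak  -> the ear signals are interaurally decorrelated.
    """
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            l, r = left[lag:], right[:len(right) - lag]
        else:
            l, r = left[:len(left) + lag], right[-lag:]
        c = np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r))
        best = max(best, c)
    return best

# Case 1: identical noise at the two ears, offset by a pure ITD
itd_samples = 8                      # ~0.5 ms at 16 kHz
left = band
right = np.roll(band, itd_samples)   # delayed copy
corr_itd = max_interaural_corr(left, right)

# Case 2: independent noise at each ear (interaurally decorrelated)
right_decorr = rng.standard_normal(n)
corr_decorr = max_interaural_corr(left, right_decorr)

print(corr_itd, corr_decorr)
```

The ITD-only case peaks near 1 once the correlator reaches the compensating lag, while the decorrelated case stays near 0 at all lags, which is why decorrelation, unlike a between-pair ITD difference, is detectable by a purely within-channel mechanism.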

Publication types

  • Comparative Study

MeSH terms

  • Humans
  • Noise / adverse effects
  • Perceptual Masking
  • Phonetics
  • Speech Perception*