Multiresolution spectrotemporal analysis of complex sounds

J Acoust Soc Am. 2005 Aug;118(2):887-906. doi: 10.1121/1.1945807.

Abstract

A computational model of auditory analysis is described that is inspired by psychoacoustical and neurophysiological findings in early and central stages of the auditory system. The model provides a unified multiresolution representation of the spectral and temporal features likely critical in the perception of sound. Simplified, more specifically tailored versions of this model have already been validated by successful application in the assessment of speech intelligibility [Elhilali et al., Speech Commun. 41(2-3), 331-348 (2003); Chi et al., J. Acoust. Soc. Am. 106, 2719-2732 (1999)] and in explaining the perception of monaural phase sensitivity [R. Carlyon and S. Shamma, J. Acoust. Soc. Am. 114, 333-348 (2003)]. Here we provide a more complete mathematical formulation of the model, illustrating how complex signals are transformed through its various stages, and relating it to comparable existing models of auditory processing. Furthermore, we outline several reconstruction algorithms to resynthesize the sound from the model output so as to evaluate the fidelity of the representation and the contribution of different features and cues to the sound percept.
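To make the idea of a "unified multiresolution representation of spectral and temporal features" concrete, the sketch below filters a spectrogram with a bank of 2D modulation filters tuned to different temporal rates (Hz) and spectral scales (cycles/kHz). This is a minimal illustration only, assuming a plain STFT magnitude spectrogram in place of the paper's cochlear-model front end; the `modulation_analysis` function, the rate/scale values, and the Gaussian filter shapes are this sketch's assumptions, not the authors' parameters.

```python
# Minimal sketch of multiresolution spectrotemporal (modulation) analysis.
# An STFT magnitude spectrogram stands in for an auditory spectrogram;
# all rate/scale values and filter shapes are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def modulation_analysis(x, fs, rates=(2.0, 4.0, 8.0, 16.0),
                        scales=(0.25, 0.5, 1.0, 2.0)):
    """Filter a spectrogram into channels tuned to temporal modulations
    (rate, Hz) and spectral modulations (scale, cycles/kHz)."""
    f, t, S = stft(x, fs=fs, nperseg=512, noverlap=384)
    A = np.abs(S)                # magnitude spectrogram (freq x time)
    F2 = np.fft.fft2(A)          # joint spectrotemporal modulation spectrum

    wt = np.fft.fftfreq(A.shape[1], d=t[1] - t[0])          # temporal mod. (Hz)
    ws = np.fft.fftfreq(A.shape[0], d=(f[1] - f[0]) / 1e3)  # spectral mod. (cyc/kHz)

    channels = {}
    for rate in rates:
        for scale in scales:
            # Gaussian passbands around +/-rate and +/-scale: a crude stand-in
            # for the Gabor-like filters used in cortical modulation models.
            Ht = np.exp(-0.5 * ((np.abs(wt) - rate) / (0.3 * rate)) ** 2)
            Hs = np.exp(-0.5 * ((np.abs(ws) - scale) / (0.3 * scale)) ** 2)
            H = np.outer(Hs, Ht)           # separable 2D modulation filter
            channels[(rate, scale)] = np.real(np.fft.ifft2(F2 * H))
    return channels

# Example: analyze one second of noise at 16 kHz.
x = np.random.default_rng(0).standard_normal(16000)
bank = modulation_analysis(x, fs=16000)
print(len(bank), "rate-scale channels of shape", bank[(4.0, 1.0)].shape)
```

A generic route back to sound from such a representation is to sum the channel outputs into an estimated spectrogram and recover a waveform by iterative phase estimation; the paper itself develops dedicated reconstruction algorithms for its model output, which this sketch does not reproduce.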

MeSH terms

  • Acoustic Stimulation
  • Algorithms
  • Cochlea / physiology*
  • Computer Simulation
  • Humans
  • Models, Biological*
  • Noise
  • Pitch Perception / physiology*
  • Psychoacoustics