Modelling auditory scene analysis: a representational approach

Authors

  • G.J. Brown Dept. of Comput. Sci., Sheffield Univ., UK
  • M.P. Cooke Dept. of Comput. Sci., Sheffield Univ., UK

Keywords:

hearing, speech recognition, speech technology research, arbitrary noise source, human auditory processing, auditory scene analysis, automatic music transcription

Abstract

Speech is normally heard in the presence of other interfering sounds, a fact which has plagued speech technology research. A technique for segregating speech from an arbitrary noise source is described. The approach is based on a model of human auditory processing. The auditory system has an extraordinary ability to group together acoustic components that belong to the same sound source, a phenomenon named auditory scene analysis by Bregman (1989). Models of auditory scene analysis could provide a robust front-end for speech recognition in noisy environments, and may also have applications in automatic music transcription. Additionally, the authors hope that models of this type will contribute to the understanding of hearing and hearing impairment.

Supplementary files

Published

1992-09-01

How to cite

1.
Brown G, Cooke M. Modelling auditory scene analysis: a representational approach. Canadian Acoustics [Internet]. 1992 Sep 1 [cited 2026 May 9];20(3):5-6. Available from: https://jcaa.caa-aca.ca/index.php/jcaa/article/view/711

Issue

Section

Proceedings of Acoustics Week in Canada