Perceiving Prosodic Prominence Via Unnatural Visual Information in Avatar Communication

Keywords: Avatars, prosody, multimodal perception, prominence

Abstract

Listeners integrate information from simulated faces in multimodal perception [Cohen & Massaro 1990, Behav. Res. Meth. Instr. Comp. 22(2), 260–263], though not always in the same way as with real faces [Keough et al. 2017, Can. Acoust. 45(3):176–177]. This question has become increasingly relevant with the dramatic growth of avatar communication in virtual spaces [https://www.bloomberg.com/professional/blog/computings-next-big-thing-virtual-world-may-reality-2020/]. Prosody is especially relevant because, unlike segmental speech sounds, the visual cues to prosodic prominence (e.g., eyebrow raises and hand gestures) frequently bear no biomechanical relation to the production of its acoustic features, yet are highly reliable [Krahmer & Swerts 2007, JML 57(3): 396–414]. Moreover, avatar communication systems may convey prosodic information through unnatural means, e.g., by expressing amplitude via oral aperture (louder sound = larger opening). The present study examines whether this unnatural but reliable indicator of speech amplitude is integrated in prominence perception: we report an experiment investigating whether and how perceivers take this visual information into account when detecting prosodic prominence.
Published
2019-07-03
How to Cite
Taylor RC, Prica D, Wong EYT, Keough M, Gick B. Perceiving Prosodic Prominence Via Unnatural Visual Information in Avatar Communication. Canadian Acoustics [Internet]. 2019 Jul 3 [cited 2019 Aug 25];47(2):67-2. Available from: https://jcaa.caa-aca.ca/index.php/jcaa/article/view/3285
Section
Proceedings of the Acoustics Week in Canada
