
Perceiving Prosodic Prominence Via Unnatural Visual Information in Avatar Communication

Ryan Christopher Taylor, Dimitri Prica, Esther Y. T. Wong, Megan Keough, Bryan Gick

Abstract


Listeners integrate information from simulated faces in multimodal perception [Cohen & Massaro 1990, Behav. Res. Meth. Instr. Comp. 22(2), 260–263], though not always in the same way as from real faces [Keough et al. 2017, Can. Acoust. 45(3): 176–177]. This question is increasingly relevant given the dramatic growth of avatar communication in virtual spaces [https://www.bloomberg.com/professional/blog/computings-next-big-thing-virtual-world-may-reality-2020/]. Prosody is especially relevant because, unlike the visual cues to segmental speech sounds, the visual cues to prosodic prominence (e.g., eyebrow raises and hand gestures) frequently bear no biomechanical relation to the production of the acoustic features of prominence, yet are highly reliable [Krahmer & Swerts 2007, JML 57(3): 396–414]. Moreover, avatar communication systems may convey prosodic information through unnatural means, e.g., by expressing amplitude via oral aperture (louder sound = larger opening). The present study examines whether this unnatural but reliable indicator of speech amplitude is integrated in prominence perception. We report an experiment testing whether and how perceivers use this reliable but unnatural visual information in the detection of prosodic prominence.
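The amplitude-to-aperture mapping described above can be made concrete with a minimal sketch. The paper does not specify an implementation; the code below merely illustrates one plausible version of the mapping (louder sound = larger mouth opening), with all names, ranges, and constants (frame_rms_db, JAW_OPEN_MAX, the dBFS floor and ceiling) as illustrative assumptions rather than the authors' method.

```python
import numpy as np

JAW_OPEN_MAX = 1.0               # hypothetical avatar mouth-opening range [0, 1]
DB_FLOOR, DB_CEIL = -60.0, 0.0   # assumed dBFS range mapped onto the aperture

def frame_rms_db(samples: np.ndarray) -> float:
    """RMS level of one audio frame in dBFS (samples scaled to [-1, 1])."""
    rms = np.sqrt(np.mean(samples ** 2)) + 1e-12  # epsilon avoids log(0)
    return 20.0 * np.log10(rms)

def aperture_from_amplitude(samples: np.ndarray) -> float:
    """Map frame loudness linearly onto mouth opening: louder = larger."""
    db = np.clip(frame_rms_db(samples), DB_FLOOR, DB_CEIL)
    return JAW_OPEN_MAX * (db - DB_FLOOR) / (DB_CEIL - DB_FLOOR)

# Example: a 50 ms frame of a 200 Hz tone at moderate amplitude
t = np.linspace(0, 0.05, 2205, endpoint=False)
print(aperture_from_amplitude(0.3 * np.sin(2 * np.pi * 200 * t)))
```

Under this mapping the avatar's oral aperture tracks acoustic amplitude frame by frame, which is biomechanically unnatural (real mouth opening does not scale with loudness this way) but perfectly reliable, matching the cue structure the study manipulates.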

Keywords


Avatars, prosody, multimodal perception, prominence

Full Text:

PDF
