Sound-Stream II: Towards Real-Time Gesture-Controlled Articulatory Sound Synthesis

Authors

  • Pramit Saha, University of British Columbia
  • Debasish Ray Mohapatra, University of British Columbia
  • Praneeth SV, University of British Columbia
  • Sidney Fels, University of British Columbia

Keywords:

Articulatory speech synthesizer, voice controller, intrinsic and extrinsic tongue muscles, force-based controller, gesture-to-muscle activation, Sound Stream

Abstract

We present an interface providing four degrees-of-freedom (DOF) mechanical control of a two-dimensional, mid-sagittal tongue, linking a biomechanical toolkit called ArtiSynth with a sound synthesis engine called JASS towards articulatory sound synthesis. As a demonstration of the project, the user will learn to produce a range of JASS vocal sounds by varying the shape and position of the ArtiSynth tongue in 2D space through a set of four force-based sensors. In other words, the user will be able to physically play with these four sensors, thereby virtually controlling the magnitude of four selected muscle excitations of the tongue to vary the articulatory structure. This variation is computed in terms of 'area functions' in the ArtiSynth environment and communicated to the JASS-based audio synthesizer, coupled with a two-mass glottal excitation model, to complete the end-to-end gesture-to-sound mapping.
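The first stage of the pipeline described above maps four force-sensor readings to four muscle excitation magnitudes. A minimal sketch of such a mapping is shown below; the function name, the linear scaling, and the maximum-force parameter are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a gesture-to-muscle-activation mapping:
# four raw force-sensor readings are linearly scaled and clamped to
# [0, 1], the usual range for muscle excitation levels in a
# biomechanical simulator. The scaling constant f_max is an assumption.

def forces_to_excitations(forces, f_max=10.0):
    """Map four raw sensor forces to muscle excitations in [0, 1]."""
    if len(forces) != 4:
        raise ValueError("expected four sensor readings")
    # Linear normalization, clamped so out-of-range forces stay valid.
    return [min(max(f / f_max, 0.0), 1.0) for f in forces]
```

In a real setup, each excitation value would drive one of the four selected intrinsic or extrinsic tongue muscles in the simulator, which in turn deforms the tongue and updates the vocal-tract area function passed to the synthesizer.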

Author biography

Pramit Saha, University of British Columbia

Research Assistant, Human Communication Technologies Lab,

Department of Electrical and Computer Engineering

Supplementary files

Published

2019-02-21

How to cite

1.
Saha P, Mohapatra DR, SV P, Fels S. Sound-Stream II: Towards Real-Time Gesture-Controlled Articulatory Sound Synthesis. Canadian Acoustics [Internet]. Feb 21, 2019 [cited Nov 23, 2024];46(4):58-9. Available from: https://jcaa.caa-aca.ca/index.php/jcaa/article/view/3248

Issue

Section

Proceedings of Acoustics Week in Canada