Sound-Stream II: Towards Real-Time Gesture-Controlled Articulatory Sound Synthesis


  • Pramit Saha University of British Columbia
  • Debasish Ray Mohapatra University of British Columbia
  • Praneeth SV University of British Columbia
  • Sidney Fels University of British Columbia


Keywords: articulatory speech synthesizer, voice controller, intrinsic and extrinsic tongue muscles, force-based controller, gesture-to-muscle activation, Sound Stream


We present an interface involving four-degrees-of-freedom (DOF) mechanical control of a two-dimensional, mid-sagittal tongue through a biomechanical toolkit called ArtiSynth and a sound synthesis engine called JASS, towards articulatory sound synthesis. As a demonstration of the project, the user will learn to produce a range of JASS vocal sounds by varying the shape and position of the ArtiSynth tongue in 2D space through a set of four force-based sensors. In other words, the user will be able to physically manipulate these four sensors, thereby virtually controlling the magnitude of four selected muscle excitations of the tongue and so varying the articulatory structure. This variation is computed in terms of ‘area functions’ in the ArtiSynth environment and communicated to the JASS-based audio synthesizer, coupled with a two-mass glottal excitation model, to complete the end-to-end gesture-to-sound mapping.
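The pipeline above (sensor forces → muscle excitations → area function → synthesizer) can be sketched as follows. This is a minimal illustration only: the function names, the normalization constant, the 44-section tract, and the constriction rule are all assumptions for exposition, not the authors' actual ArtiSynth/JASS Java API.

```python
# Illustrative sketch of a gesture-to-sound mapping stage chain.
# All names and constants are hypothetical; the real system couples
# ArtiSynth (biomechanics) with JASS (audio synthesis) in Java.

def forces_to_excitations(forces, f_max=10.0):
    """Normalize raw sensor forces (in newtons) to muscle excitations in [0, 1]."""
    return [min(max(f / f_max, 0.0), 1.0) for f in forces]

def excitations_to_area_function(excitations, n_sections=44, rest_area=3.0):
    """Toy stand-in for the biomechanical step: each of the four
    excitations constricts one region of an n-section area function
    (cross-sectional areas along the vocal tract, in cm^2)."""
    areas = [rest_area] * n_sections
    region = n_sections // len(excitations)
    for i, e in enumerate(excitations):
        for j in range(i * region, (i + 1) * region):
            areas[j] = rest_area * (1.0 - 0.8 * e)  # up to 80% constriction
    return areas

forces = [2.0, 0.0, 10.0, 5.0]        # one reading from each of the four sensors
exc = forces_to_excitations(forces)   # muscle excitation levels in [0, 1]
areas = excitations_to_area_function(exc)  # area function sent to the synthesizer
```

In the real system this area function would be streamed to the JASS synthesizer, whose two-mass glottal model supplies the excitation signal that the tract filter shapes into vocal sound.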

Author Biography

Pramit Saha, University of British Columbia

Research Assistant, Human Communication Technologies Lab,

Department of Electrical and Computer Engineering

How to Cite

Saha P, Mohapatra DR, SV P, Fels S. Sound-Stream II: Towards Real-Time Gesture-Controlled Articulatory Sound Synthesis. Canadian Acoustics [Internet]. 2019 Feb. 21 [cited 2024 May 23];46(4):58-9.



Proceedings of the Acoustics Week in Canada