Forced-alignment of the sung acoustic signal using deep neural nets
Sung speech differs acoustically from ordinary speech, whether careful or spontaneous. To analyse why sung speech poses a distinct challenge for tools such as forced aligners and automatic transcribers, we trained a deep neural network to extract phone-level information from a sung acoustic signal. The current best network takes raw audio from a singer as input and outputs time-aligned phoneme labels, predicting the phoneme the singer is producing at ten-millisecond increments. Our audio data come from the Folkways collection maintained by the University of Alberta Sound Studies Institute and consist of several folk songs, mostly sung a cappella by a few individual singers. Before being used for training or testing, each song was aligned by hand, with the start and end point of every phoneme marked. The audio is then cut into twenty-five-millisecond frames spaced ten milliseconds apart; the network assigns each frame a label, which is compared with the label given by the human transcription to evaluate the network's performance. To increase the amount of training data, all of the data were duplicated and noise was added to the copies. During training, accuracy is computed automatically as the proportion of frames whose predicted label matches the hand-assigned label. By this method, we found that the acoustic differences between speech and sung speech are substantial enough that the two tasks require separate acoustic models; however, training on data from both genres increased the accuracy of the overall model.
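The frame-level setup described above (twenty-five-millisecond frames at ten-millisecond hops, noise-duplication augmentation, and accuracy as the fraction of frames labelled correctly) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the Gaussian noise model, and the 20 dB signal-to-noise ratio are all assumptions.

```python
import numpy as np


def frame_signal(audio, sr, win_ms=25.0, hop_ms=10.0):
    """Slice a 1-D audio signal into overlapping frames.

    Defaults follow the abstract: 25 ms frames spaced 10 ms apart.
    """
    win = int(sr * win_ms / 1000)   # samples per frame
    hop = int(sr * hop_ms / 1000)   # samples between frame starts
    n_frames = 1 + max(0, (len(audio) - win) // hop)
    return np.stack([audio[i * hop: i * hop + win] for i in range(n_frames)])


def frame_accuracy(predicted, reference):
    """Fraction of frames whose predicted phoneme label matches the
    hand-aligned reference label (the evaluation used in the abstract)."""
    predicted = np.asarray(predicted)
    reference = np.asarray(reference)
    return float(np.mean(predicted == reference))


def add_noise(audio, snr_db=20.0, rng=None):
    """Duplicate-and-corrupt augmentation: return a noisy copy of the
    signal.  Gaussian noise and the SNR value are assumptions; the
    abstract only says noise was added to duplicated data."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return audio + rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
```

For one second of 16 kHz audio, `frame_signal` yields 98 frames of 400 samples each; each frame would receive one phoneme label from the network and one from the human transcription, and `frame_accuracy` compares the two sequences.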