Canadian Acoustics <p>This quarterly journal is free to individual members of the Canadian Acoustical Association (CAA) and to institutional subscribers. <strong>Canadian Acoustics</strong> publishes refereed articles and news items on all aspects of acoustics and vibration. It also includes information on research, reviews, news, employment, new products, activities and discussions. Papers reporting new results and applications, as well as review or tutorial papers and shorter research notes, are welcome in English or in French.</p> en-US Copyright on articles is held by the author(s). The corresponding author has the right to grant, and does grant, on behalf of all authors, a worldwide exclusive licence (or non-exclusive licence for government employees) to the Publishers and its licensees in perpetuity, in all forms, formats and media (whether known now or created in the future):<br />i) to publish, reproduce, distribute, display and store the Contribution;<br />ii) to translate the Contribution into other languages, create adaptations and reprints, include it within collections, and create summaries, extracts and/or abstracts of the Contribution;<br />iii) to exploit all subsidiary rights in the Contribution;<br />iv) to provide for the inclusion of electronic links from the Contribution to third-party material wherever it may be located;<br />v) to license any third party to do any or all of the above.<br /><br /> (Prof. Umberto Berardi) (Cécile Le Cocq, P.Eng., Ph.D.) Wed, 03 Jul 2019 10:54:08 +0000 Special Issue: Audiology and Neurosciences Olivier Valentin Copyright (c) 2019 Olivier Valentin Wed, 03 Jul 2019 10:54:07 +0000 Auditory Functions of the Peripheral Hearing System and the Common Conditions Affecting Sound Conduction <p class="Abstract">The peripheral hearing system combines mechanical and electrical transmission through the different structures of the outer, middle and inner ear. 
The World Health Organization estimates that 6.1% of the world's population lives with a bilateral disabling hearing loss. Here, the roles of the different structures of the peripheral hearing system are reviewed, and the most common causes of hearing loss related to mechanical transmission in the human ear are presented. Some common causes of sensorineural hearing loss are also discussed. More precisely, ear canal blockage, external ear infection (in which the ear canal is blocked by severe swelling), Eustachian tube dysfunction, serous and acute otitis, tympanic membrane perforation, cholesteatoma, ossicular chain discontinuity, otosclerosis, Ménière's disease, presbycusis and noise-induced hearing loss are briefly presented in this paper in an attempt to highlight these problems and pathologies for the Canadian acoustical community.</p> Laurence Martin, Olivier Valentin Copyright (c) 2019 Laurence Martin, Olivier Valentin Wed, 03 Jul 2019 10:54:07 +0000 Using the Auditory Brainstem Response Elicited by Within-Channel Gaps to Measure Temporal Resolution <p class="Abstract">The Auditory Brainstem Response (ABR) can be used to measure the early temporal activity of the auditory system. A gap-in-noise ABR has been developed to measure the electrophysiological response to auditory stimulation without requiring the listener to attend to the task. In the present study, 15 young adults passively listened to stimuli of various gap widths presented in separate sequences. In a single sequence, two identical 15 ms filtered noise bursts, with a center frequency of either 750 or 3750 Hz, were presented separated by a gap (2, 5, 10, 20, 30, 40 or 50 ms in duration), with the second noise burst followed by an interstimulus interval of no less than 50 ms. An ABR was recorded at the onset of the first noise burst before the gap (pre-gap) and at the onset of the second noise burst (i.e. at the offset of the gap, post-gap). 
The amplitude of wave V elicited after the gap increased with gap duration, whereas the amplitude of wave V before the gap, the control, remained relatively constant. A significant difference was found between the amplitudes of wave V elicited before and after the gap for gap durations at or below 20 ms for 750 Hz, and at or below 5 ms for 3750 Hz. The gap-in-noise ABR can potentially provide frequency-specific information for the study of temporal resolution in populations with a variety of hearing disorders.</p> Victoria Duda-Milloy, Eric Zorbas, Daniel L. Benoit, Amineh Koravand Copyright (c) 2019 Victoria Duda-Milloy Wed, 03 Jul 2019 10:54:07 +0000 The Effects of Bilingualism on Speech Evoked Brainstem Responses Recorded in Quiet and in Noise <p>The main objective of the present study was to investigate the effect of sensory enrichment, such as bilingualism, on subcortical processing in quiet and in adverse listening conditions such as the presence of noise. More specifically, the aim of this investigation was to identify neural biomarkers at the brainstem level distinguishing bilinguals from monolinguals. Forty-one 18- to 25-year-old adults participated in the study: 19 monolinguals and 22 bilinguals. Their language fluency was assessed with the Language Experience and Proficiency (LEAP) questionnaire. Auditory Brainstem Responses (ABRs) were recorded using click and speech /da/ stimuli, in quiet for both and also in noise for the latter. No significant differences between the two groups were observed for the click-evoked ABR. Latencies of the speech-evoked ABR transient waves (V and C) and of the periodic region (waves D and F) were longer for the monolinguals than for the bilingual group. The Frequency Following Responses (F0 and F1) of the speech-evoked ABR were similar for the two groups in quiet and in noise. Results suggested that monolinguals need more time to process speech stimuli than their bilingual peers. 
Early in the auditory system, monolinguals' neural responses related to speech processing, in the absence or presence of background noise, appear less resilient than those of adults who are fluent in two languages. Bilingualism could stimulate the automatic sound processing abilities of the auditory system in a way that makes it highly efficient. Furthermore, this study demonstrated the applications of the speech-ABR and its potential usefulness as a clinical biomarker.</p> Amineh Koravand, Jordon Thompson, Geneviève Chénier, Neda Kordjazi Copyright (c) 2019 Amineh Koravand Wed, 03 Jul 2019 10:54:07 +0000 The Effects of Singing Lessons on Speech Evoked Brainstem Responses in Children with Central Auditory Processing Disorders <p class="Abstract">This study investigated the effects of formal singing lessons on subcortical auditory responses in children with central auditory processing disorders (CAPD). Eleven school-aged children (7-11 years old) participated in the study. Auditory brainstem responses (ABRs) were recorded using click and speech (/da/) stimuli before and after 6 months of singing lessons. The lessons followed a curriculum specifically designed to address the deficits in pitch and timing perception seen in children with CAPD. Results revealed delayed latencies in the children with CAPD, both before and after singing lessons, compared to normative data developed for children with normal auditory function. However, no significant latency differences were observed after the six to eight months of singing lessons. Significantly larger amplitudes were observed for Wave A and the VA slope after musical training. A trend toward larger amplitude was also observed for Wave O. Enriched auditory experiences have a profound influence on how sound is processed in the brain. The data of the present study suggest that the efficacy of formal singing lessons can be demonstrated by the speech-ABR in children with CAPD. 
The magnitude of the onset and offset of the speech-ABR response improved after the six to eight months of formal auditory (music) training. Subcortical response amplitude could be more sensitive than latency in demonstrating the positive effect of singing lessons. This duration, however, appears insufficient to reveal an improvement in neural timing (latency).</p> Amineh Koravand, Erin Parkes, Lucius Fauve Duquette-Laplante, Caryn Bursch, Sarah Tomaszewski Copyright (c) 2019 Amineh Koravand Wed, 03 Jul 2019 10:54:07 +0000 Development of a Real-Time EOG-Based Acoustical Beamformer Algorithm for Binaural Hearing Devices <p class="Abstract">Electro-oculography (EOG) is a technique used, for instance, to evaluate ocular motility by recording the potential difference between the cornea (positive potential) and the retina (negative potential) with periorbital electrodes. This paper presents a proof of concept of an acoustical beamforming algorithm that uses the gaze angle obtained from EOG recordings to optimize sound localization and perception. Such an algorithm would help enhance the user experience of people using binaural hearing devices, such as hearing aids or digital hearing protectors, by improving, for example, speech recognition in noise.</p> Olivier Valentin, Saumya Vij, Jérémie Voix Copyright (c) 2019 Olivier Valentin, Saumya Vij, Jérémie Voix Wed, 03 Jul 2019 10:54:07 +0000 Turbulent Energy Prediction for an External Flow Around Valeo Cooling Fan by V2-F Modelling and Improved K-Ε Low Reynolds Model <p class="Abstract">The noise field can be described as the consequence of pressure fluctuations generated by turbulent flows close to solid walls, with the aeroacoustic conversion governed by Lighthill's theory. This paper discusses the results of numerical simulations of the external flow around an asymmetric wing profile (Valeo CD). 
The numerical simulation compares the original Durbin V<sup>2</sup>-f model and the low-Reynolds-number k-ε model. Some modifications were introduced to the k-ε model, by replacing the strain rate and vorticity terms, in order to improve the turbulent energy prediction of low-Reynolds viscous models. The results obtained were compared with full-scale experiments in the large wind tunnel at the École Centrale de Lyon and with an LES simulation. The V<sup>2</sup>-f model showed good stability and satisfactory turbulent energy prediction near the wall compared to the modified k-ε model. The improvements were due to the normal velocity fluctuations v² and to the anisotropic effects modelled by the elliptic relaxation function close to the solid wall.</p> Nasreddine Akermi, Azzeddine Khorsi, Omar Imine Copyright (c) 2019 Azzeddine Khorsi Wed, 03 Jul 2019 10:54:07 +0000 Acoustic Correction of a Renaissance Period Hall <p class="Abstract"><span lang="EN-CA">Medieval and Renaissance halls are often used for musical events or conferences. These rooms have vaulted ceilings, and their surfaces are covered with plaster and marble. The acoustics of these spaces are not optimal for listening to musical performances or conferences. To make these environments acoustically usable, an acoustic correction must be made. A hall built during the Renaissance period and used for cultural events was considered as a case study. Acoustic measurements showed a mid-frequency reverberation time of about 4.5 seconds. The acoustic correction was evaluated with architectural acoustics software. The virtual model was analyzed first in the initial configuration and then with sound-absorbing panels inserted on the walls and under the vaulted ceiling. Subsequently, the acoustic correction was carried out by installing the sound-absorbing panels in the room. 
Acoustic measurements were then taken in this new configuration, in the absence of the public, and the mid-frequency reverberation time was reduced to 2.0 seconds, as predicted by the design project.</span></p> Gino Iannace, Giuseppe Ciaburro, Amelia Trematerra, Corrado Foglia Copyright (c) 2019 Gino Iannace Wed, 03 Jul 2019 10:54:08 +0000 Perceiving Prosodic Prominence Via Unnatural Visual Information in Avatar Communication Listeners integrate information from simulated faces in multimodal perception [Cohen &amp; Massaro 1990, Behav. Res. Meth. Instr. Comp. 22(2), 260–263], but not always in the same way as real faces [Keough et al. 2017, Can. Acoust. 45(3):176–177]. This is increasingly relevant given the dramatic increase in avatar communication in virtual spaces []. Prosody is especially relevant because, compared to segmental speech sounds, the visual cues indicating prosodic prominence (e.g., eyebrow raises and hand gestures) frequently bear no biomechanical relation to the production of the acoustic features of prominence, yet are nonetheless highly reliable [Krahmer &amp; Swerts 2007, JML 57(3): 396–414]. Moreover, avatar communication systems may convey prosodic information through unnatural means, e.g., by expressing amplitude via oral aperture (louder sound = larger opening). The present study examines whether this unnatural but reliable indicator of speech amplitude is integrated in prominence perception. We report an experiment describing whether and how perceivers take into account this reliable but unnatural visual information in the detection of prosodic prominence. Ryan Christopher Taylor, Dimitri Prica, Esther Y. T. 
Wong, Megan Keough, Bryan Gick Copyright (c) 2019 Ryan Christopher Taylor Wed, 03 Jul 2019 10:54:08 +0000 Minutes of the 2019 CAA Board of Directors Meeting Roberto Racca Copyright (c) 2019 Roberto Racca Wed, 03 Jul 2019 10:54:08 +0000 AWC 2019 Edmonton Conference Announcement and Call for Papers Benjamin V. Tucker Copyright (c) 2019 Benjamin V. Tucker Wed, 03 Jul 2019 10:54:08 +0000 2019 ICSV26 Montreal Announcement Jérémie Voix Copyright (c) 2019 Jérémie Voix Wed, 03 Jul 2019 10:54:08 +0000