The 3rd Virtual Conference on Computational Audiology was hosted online by Hearing4All from the University of Oldenburg and Hannover Medical School. With five keynote talks, four special sessions and over 500 registered participants, this annual event continues to grow.
To accommodate different time zones, the event took place on Thursday afternoon/evening and Friday morning, Central European Summer Time. After a word of welcome by the organisers, the scientific programme opened with two keynote talks. Stefan Launer talked about the transformation of hearing aids into health agents that can monitor vital signs. Elle O’Brien discussed what hearing research could learn from the way other disciplines share data in the context of machine learning.
There were also two special sessions, on ‘Predictive Coding’ and ‘Remote Audiology’. The parallel sessions were themed ‘mHealth and remote testing,’ ‘Signal processing for hearing devices,’ ‘Measurement tools and hearing device fitting,’ and ‘Inspirations from physiology and models’. The day ended with a featured talk by Roger Miller about best practices for data sharing. Between talks, conference participants could chat or video-call each other on the online platform.
Friday started with two keynote talks. Lorenzo Picinali gave an overview of achievements with virtual reality (VR) in hearing research and summarised future challenges with VR that conference participants had pointed out in a questionnaire. Bernd Meyer talked about potential applications of deep learning in audiology. He showed speech perception models that use deep learning, as well as automatic speech recognition models that may enable automated speech audiometry in the future.
The two special sessions were on ‘Machine learning challenges to improve hearing devices’ and ‘Virtual reality for hearing research and auditory modeling in realistic environments’. The parallel sessions were themed ‘Perception in children and adults,’ ‘Complex environments,’ ‘Objective measures,’ and ‘Supporting tools for audiology and rehabilitation’. In the final keynote talk, Giovanni di Liberto discussed the brain’s prediction mechanisms in the context of music perception and described his research framework for assessing the neural tracking of sounds.
VCCA 2022 featured many talks from early-career researchers, and every special session had an early-career researcher as co-chair. Moreover, there were five awards for early-career researchers. The two video-pitch awards went to Mark Saddler and Nancy Sotero. Lakshay Khurana and Iordanis Thoidis each received a young scientist award. Finally, Giulia Angonese received the special award for her interdisciplinary work on ‘Psychological profiling in a virtual hearing clinic’.
Future meeting details: The 4th Virtual Conference on Computational Audiology will be hosted online in June/July 2023 by Jessica Monaghan (National Acoustic Laboratories), Karina de Sousa (University of Pretoria), and Fan-Gang Zeng (University of California, Irvine).
Maartje Hendrikse, Audiology, Erasmus MC, Rotterdam, Netherlands.