PhD, Principal Research Fellow in Hearing & Deafness,
University of Cambridge, UK.
Reader in Audio Experience Design,
Imperial College London, UK.
This Jan/Feb 2022 issue of ENT & Audiology News highlights advances in computational audiology that are driving the field forwards. We are not referring to digital audiology (computerised audiology techniques), but rather to developments that move the field away from the concept of a sole-use device and towards a system with fluidity between software and hardware.
To personalise settings and functions, such systems can be dynamically and automatically updated based on feedback and other data inputs. In an interview, Alastair Cruickshank provides a patient perspective on the resources that empowered him to personalise his hearing devices.
The field is evolving at a phenomenal speed, with innovative technologies and capabilities continuously emerging. The 2021 Virtual Conference on Computational Audiology (VCCA), hosted by Tobias Goehring (Cambridge, UK) and Jan-Willem Wasmann (Nijmegen, the Netherlands), demonstrated a multitude of advances.
The COVID-19 pandemic has accelerated progress out of necessity, particularly for remote care. Hearing healthcare practitioners have learnt new skills to treat patients in their homes, creating technological challenges for practitioners and patients alike. This computational audiology revolution requires the active engagement of clinicians and patients to succeed. Helen Cullington writes about clinician experiences of adapting to the changing computational audiology landscape.
Rapid developments in virtual and augmented reality (VR/AR) mean that high-quality implementations are now available on cost-effective consumer units. Applications are numerous, including computational audiology approaches for training or for realistic and controllable hearing assessments. Evidence of this trend can be found in the acquisition of Sennheiser Consumer Electronics, with its immersive audio hardware and software products, by SONOVA. The sensing capabilities of AR/VR headsets, including inertial units and eye trackers, combined with advances in machine learning, the internet of things and spatial signal processing, are driving a disruptive change that could improve quality of life for people with hearing difficulties, well beyond what traditional hearing devices offer. François Patou provides an overview of advances in VR/AR and related applications.
Appropriate resources can be tailored to meet individual needs, maximising benefit and improving uptake of hearing devices. A key enabler is giving patients control, both to improve their hearing experiences and to monitor other health conditions. Colver Ken Howe Ne and colleagues discuss how hearables can be adapted for multiple health monitoring purposes.
Advances in artificial intelligence (AI) and the ability of devices to collect large amounts of data have meant that AI algorithms can be trained more rapidly and effectively, creating intelligent hearing devices, as presented by Brian Moore (Cambridge, UK) and Josef Schittenlacher (Manchester, UK) at the VCCA. AI can help streamline audiological pathways and predict red flags that guide intervention. Jessica Monaghan provides an overview of AI advances and highlights an approach for analysing spoken utterances to detect hearing loss.
Let’s imagine a future where hearing can be assisted by earbuds, hearables and/or VR/AR headsets, connected to smartphones/watches, wireless loudspeakers and voice-activated applications; where these send information within the individual’s hearing range, and continuously adapt according to environment and personal needs; where spatial attributes of soundscapes can be enhanced to hear a person speaking across the table, or to focus on sounds of nature. And let’s imagine that information captured can support the clinician who, with the help of AI, can check, advise and empower hearing device users to take control of their own hearing experiences.