
In 2017, Yaqing Su, a summer intern with Starkey Hearing Technologies, studied the perception of spoken consonants by hearing-impaired listeners. Su’s research was selected for a special session titled “Acoustics Outreach: Planting Seeds for Future Clinical and Physiological Collaborations ‘18” at the spring 2018 meeting of the Acoustical Society of America. We caught up with Su’s mentor, Jayaganesh Swaminathan, to talk about the study and its importance.

Background on the issue of perception

In order to perceive meaningful speech, the auditory system must recognize different consonants amidst a noisy and variable acoustic signal. Difficulty perceiving consonants is often associated with the speech perception challenges faced by people with hearing loss. For example, imagine a retired couple conversing over their favorite television program. The wife tells her husband, “I think you need a hearing test.” The husband responds angrily, “Why the heck do I need a hairy chest?” This scenario demonstrates the classic problem people with hearing loss face: mishearing a few consonants in noise can lead to a completely different interpretation of the original message.

Part of the challenge in perceiving consonants correctly arises from the impoverished representation of consonants in the auditory pathway (that is, in the underlying neural mechanisms) caused by hearing loss.

A quick overview of the study

The overall goal of this study was to use surface-level electroencephalography (EEG) recordings to better understand the neural mechanisms that contribute to the difficulties hearing-impaired listeners have with consonant perception.

Both perceptual and EEG responses were measured for a set of nonsense vowel-consonant-vowel (VCV) speech tokens in listeners with moderate hearing loss. Responses were measured in unamplified and amplified conditions. A state-of-the-art machine-learning classifier was trained to discriminate the EEG signals evoked by each consonant, and the classifier’s performance was compared to the listeners’ psychophysical performance.
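To make the decoding step concrete, here is a minimal sketch, in Python, of training a classifier to tell consonants apart from EEG responses and scoring it with cross-validation. The data shapes, the use of scikit-learn, and the choice of a regularized logistic-regression classifier are illustrative assumptions; the study’s actual classifier and feature pipeline are not described here.

# Minimal sketch (Python, NumPy + scikit-learn) of decoding consonant identity
# from EEG epochs. The shapes and the logistic-regression classifier are
# illustrative assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_consonants(epochs: np.ndarray, labels: np.ndarray) -> float:
    """Return cross-validated accuracy of decoding consonant identity from EEG epochs.

    epochs: array of shape (n_trials, n_channels, n_samples)
    labels: consonant label for each trial, shape (n_trials,)
    """
    X = epochs.reshape(epochs.shape[0], -1)  # flatten channels x time into one feature vector per trial
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, labels, cv=5)  # 5-fold cross-validation
    return scores.mean()

# Example with synthetic placeholder data (real inputs would be the recorded EEG epochs):
# epochs = np.random.randn(200, 64, 256)      # 200 trials, 64 channels, 256 time samples
# labels = np.random.randint(0, 4, size=200)  # e.g. 4 consonant categories
# print(decode_consonants(epochs, labels))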

For all of the listeners, performance in the perceptual discrimination task was poor, even with sound amplification. [Referring to “sound amplification”: this was not amplification through hearing aids but simple linear gain. The amplification was applied to the speech stimuli based on each listener’s audiogram, and the amplified stimuli were presented over headphones. This linear amplification “mimics” the sound amplification provided by hearing aids.] EEG waveforms from the listeners also showed different patterns of responses for each consonant, and a simple classifier could decode consonants from the EEG signal. For certain consonant categories, a relationship between the neural and perceptual representations of the consonants could be established. The results from this study have implications for technologies that aim to use neural signals to guide hearing aid processing.
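The bracketed note above describes linear, audiogram-based gain rather than hearing aid processing. Below is a minimal sketch of what such amplification could look like, assuming a classic half-gain rule and a simple FIR filter; the study’s actual gain prescription and filtering details are not specified here.

# Minimal sketch (Python, NumPy + SciPy) of audiogram-based linear gain applied to
# a speech stimulus. The half-gain rule and the FIR implementation are illustrative
# assumptions, not the study's actual amplification procedure.
import numpy as np
from scipy.signal import firwin2, lfilter

def apply_linear_gain(speech, fs, audiogram_freqs_hz, thresholds_db_hl):
    """Apply frequency-dependent linear gain derived from a listener's audiogram."""
    gains_db = 0.5 * thresholds_db_hl  # classic half-gain rule, used here only for illustration
    nyquist = fs / 2.0
    # Build a gain-vs-frequency curve spanning 0 Hz to the Nyquist frequency.
    freqs = np.concatenate(([0.0], audiogram_freqs_hz, [nyquist]))
    gains_lin = 10.0 ** (np.concatenate(([gains_db[0]], gains_db, [gains_db[-1]])) / 20.0)
    fir = firwin2(numtaps=257, freq=freqs / nyquist, gain=gains_lin)  # linear-phase FIR filter
    return lfilter(fir, [1.0], speech)

# Example with a hypothetical sloping moderate loss (stimuli presented over headphones):
# fs = 32000.0
# audiogram_freqs = np.array([250, 500, 1000, 2000, 4000, 8000], dtype=float)
# thresholds = np.array([30, 35, 40, 50, 60, 65], dtype=float)  # dB HL
# amplified = apply_linear_gain(vcv_token, fs, audiogram_freqs, thresholds)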

Talking with Starkey Research Scientist Jayaganesh Swaminathan about the value of this study

Starkey Hearing Technologies: This study is incredible. In your opinion, what specifically influenced ASA to honor Su with an award for it?
Jayaganesh Swaminathan: This is really a beautiful study, as it had a physiology component, a psychophysical component, and a quantitative component in addressing speech perception difficulties in hearing-impaired listeners. Su’s project at Starkey is highly translational in nature and lays a solid scientific foundation for developing improved hearing aid fitting and speech enhancement strategies to improve the lives of people with hearing loss.

SHT: Why was this study so important to conduct?
JS: There has been a recent surge of interest in developing hearing aids that incorporate EEG signals as an input to guide signal processing and facilitate robust speech perception in the hearing impaired (HI). However, the vast majority of research on this topic has been conducted with normal-hearing listeners, and there is a paucity of EEG data from HI listeners. The results from Yaqing’s study are critical to advance both basic and applied research aimed at using EEG signatures to better understand the effects of hearing impairment on the neural coding of speech and to design better hearing aid technologies.

SHT: What is the value of the data from this study for today’s hearing aid technologies?
JS: First and foremost, the data from this study establish the feasibility of recording meaningful EEG signals in response to consonants from HI subjects that can be related to their perceptual responses. This is encouraging for research efforts aimed at developing speech enhancement strategies in hearing aids based on signals measured from the brain. That said, I do not think we are technologically ready to transfer the knowledge from Yaqing’s study directly into current hearing aid technologies just yet. For example, we still do not have the technology to reliably record EEG signals from a hearing aid.

SHT: What are the implications of this study for future efforts to improve speech enhancement?
JS: In my opinion, this is the first step in a long chain of events which may one day lead to the development of hearing aid technologies that can be modulated based on neural responses.

SHT: What do you feel is the next step along this research pathway?
JS: Here are a few immediate research steps to pursue, although this list is not exhaustive:

  1. Replicate the findings with a larger HI population
  2. Measure EEG responses to consonants in more complex speech materials, such as sentences, across different listening environments
  3. Study the effects of hearing aid signal processing components (such as compression and noise reduction) on the fidelity of the neural signals measured in response to speech

FURTHER INFORMATION

W: www.starkey.com
Facebook: www.facebook.com/starkeyhearing
Twitter: twitter.com/starkeyhearing
