
Hearing aid + accessory + smartphone app = a ‘synching’ feeling? Marshall Chasin explains why patients might be losing the rhythm.


The historical literature (at least going back to some of the classic texts in the 1960s) is full of recommendations to improve the environment to optimise lip-reading cues for hard of hearing people. Of course, back then, one needed to be relatively close to obtain these visual cues.

A few lines of algebra will show that for roughly every 1/3 metre (or 1.1 feet) of distance, there will be a one millisecond (msec) delay between visual and auditory cues. At four or five metres – a realistic upper limit for lip reading – there will be 12–15 msec of delay, and this is quite reasonable: we are relatively immune to such short delays, and a 12–15 msec mismatch between the facial cues and the perception of the sound poses no real issue. Beyond five metres or so, lip reading can be problematic and other means of speech transmission need to step up to the plate. In other words, lip reading and facial cues are self-limiting: by the time a speaker is far enough away for the time delay to matter, lip and facial cues are of no real significance anyway.
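For readers who want to check the arithmetic, here is a minimal sketch in Python, assuming a speed of sound of roughly 343 m/s in air at room temperature (light, for our purposes, arrives instantly):

```python
# Each ~0.343 m of distance adds ~1 msec of lag between what is seen
# (effectively instantaneous) and what is heard.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate, in air at ~20 degrees C

def audiovisual_delay_ms(distance_m: float) -> float:
    """Delay of the acoustic signal relative to the visual cue, in msec."""
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0

for d in (0.343, 1.0, 4.0, 5.0):
    print(f"{d:5.3f} m -> {audiovisual_delay_ms(d):4.1f} msec")
# 0.343 m ->  1.0 msec
# 1.000 m ->  2.9 msec
# 4.000 m -> 11.7 msec
# 5.000 m -> 14.6 msec
```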

Historically, there were not many options short of hearing aids which, in the 1960s, 1970s and for much of the 1980s, used rudimentary linear, peak-clipping class A amplifiers. Assistive listening devices such as FM systems, infrared systems and inductive loop systems were introduced into clinical practice to improve communication. However, at the larger distances between speaker and listener that these systems served, visual and facial cues were not effective in the first place – in other words, visual cues could not yet be considered a ‘distraction’.

Visual cues as a distraction

But this ‘distraction’ is now rearing its ugly head once again with modern hearing aid algorithms such as noise reduction, some forms of AI, some smartphone apps, and some accessories (Bluetooth or otherwise), where the digital delay can be on the order of 50–80 msec, and group digital delays can be far in excess of 100 msec. These algorithms and accessories can be in use even when the hard-of-hearing listener is only one or two metres away. In scenarios such as this, the lip reading and visual cues can be significantly out of ‘synch’ with the amplified sound, leading to a ‘distraction’ and a potential degradation of communication. And to further complicate things, depending on the hearing aid technology used – a bank of detection filters or an FFT – many of the sources have differing delays as a function of frequency. It is almost as if a hard-of-hearing person may need to close their eyes when these algorithms, accessories and smartphone apps are being used. Table 1 shows some expected digital delays for several devices, as reported by the manufacturers.

[Table 1. Expected digital delays for several devices, as reported by the manufacturers.]
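To make these numbers concrete, here is a small sketch of how the pieces add up; the specific delay values below are hypothetical placeholders chosen only to mirror the ranges quoted above, not measurements of any particular product:

```python
# At conversational distances, the digital chain - not the listening
# distance - dominates the audiovisual mismatch.

SPEED_OF_SOUND_M_PER_S = 343.0

def av_mismatch_ms(distance_m: float,
                   processing_delay_ms: float = 0.0,
                   accessory_delay_ms: float = 0.0) -> float:
    """Total lag of the heard signal behind the visual cue, in msec."""
    acoustic_ms = distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0
    return acoustic_ms + processing_delay_ms + accessory_delay_ms

print(av_mismatch_ms(1.5))              # unaided at 1.5 m: ~4.4 msec
print(av_mismatch_ms(1.5, 8.0))         # hypothetical 8 msec aid: ~12.4 msec
print(av_mismatch_ms(1.5, 8.0, 70.0))   # add a 70 msec accessory: ~82.4 msec
```

Even at one or two metres, a wireless accessory or heavy processing pushes the mismatch well past the 12–15 msec region that listeners tolerate comfortably.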

Speech overlap – increasing audio delay

The above video shows an increasing series of audio delays, in 10-millisecond increments, relative to the non-delayed lip reading and other facial cues. The reality can actually be worse than this video demonstrates, because the video uses the same phrase for all levels of delay: by the time one reaches a 50 or 60 msec delay, the phrase is so well memorised that one can ‘almost predict’ the correct visual cues.
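For those who wish to reproduce this sort of demonstration, a rough recipe is sketched below; the file name and the use of the soundfile library are assumptions of the sketch, not part of the original video:

```python
# Write copies of a (mono) speech recording shifted by 0-60 msec,
# to be played back against the original video of the talker.
import numpy as np
import soundfile as sf  # pip install soundfile

speech, sr = sf.read("phrase.wav")        # hypothetical mono recording
for delay_ms in range(0, 70, 10):         # 0, 10, ..., 60 msec
    pad = np.zeros(int(sr * delay_ms / 1000.0))
    delayed = np.concatenate([pad, speech])
    sf.write(f"phrase_{delay_ms}ms.wav", delayed, sr)
```

Using a different phrase at each delay step would sidestep the memorisation effect noted above.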

"It is almost as if a hard-of-hearing person may need to close their eyes when these algorithms, accessories and smartphone apps are being used."

Music can be even more problematic than speech when it comes to delay, digital or otherwise. In many live performances of percussion-heavy pieces – crashing cymbals, extraneous sounds such as the cannon blasts in Tchaikovsky’s 1812 Overture, Op. 49, or the drum corps at the rear of a marching band – the percussion must be synchronised with the rest of the music, and getting this wrong poses significant challenges. The orchestra needs to time the cannon blasts very precisely (assuming that the blasts are set off from a safe and distant location). This is not difficult and really only takes several lines of high school algebra, as sketched below. What cannot be fixed by careful timing, however, is the situation of someone in the audience who perceives the transduced music both through an assistive listening system and through an unassisted or unamplified auditory route.
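The algebra in question is nothing more than distance divided by the speed of sound; a minimal sketch, with made-up distances:

```python
# If the cannon is d metres from the hall, its blast must be triggered
# d / 343 seconds early to land on the beat.

SPEED_OF_SOUND_M_PER_S = 343.0

def lead_time_s(distance_m: float) -> float:
    """How early the blast must be set off to arrive on the beat."""
    return distance_m / SPEED_OF_SOUND_M_PER_S

print(f"{lead_time_s(100.0):.2f} s early at 100 m")   # ~0.29 s
print(f"{lead_time_s(343.0):.2f} s early at 343 m")   # ~1.00 s
```

For the listener receiving the music through two routes at once, however, no such correction is possible.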

This difference can be quite problematic. Imagine using an assistive listening device that has a substantial degree of digital delay in these circumstances, coupled with hearing aid algorithms that themselves create significant digital delay (e.g. noise reduction). The music would sound slightly off-beat at best and ‘slushy’ at worst.
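A sketch of why this dual-path listening sounds ‘slushy’: the listener receives the direct acoustic sound plus a copy delayed by the wireless and processing chain, so every percussive attack arrives twice (the 80 msec figure below is hypothetical):

```python
import numpy as np

sr = 44_100
click = np.zeros(sr)
click[0] = 1.0                               # an idealised percussive attack

chain_delay_ms = 80.0                        # hypothetical ALD + aid delay
n = int(sr * chain_delay_ms / 1000.0)
assisted = np.concatenate([np.zeros(n), click[:-n]])

heard = click + assisted                     # direct + delayed paths
print(np.nonzero(heard)[0] / sr * 1000.0)    # -> [ 0. 80.] : two attacks, msec
```

With sustained tones the same dual path produces comb filtering rather than a discrete echo, but either way the rhythm suffers.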

Recommendations

While it is true that hearing aid algorithms, accessories and other technologies with shorter delays are gradually emerging, what should we tell our hard-of-hearing clients in the meantime? My three clinical suggestions (at the current time) are:

  1. Try an experiment in which they close their eyes when up close to a speech source while some of these app-based (or even some AI-based) algorithms are being used, to see if this improves communication.
  2. Consider having a hearing aid program that can be used for music and speech where most of the advanced features (such as noise reduction) are disabled.
  3. Try to attend performance halls that use inductive / loop-based systems where there is no additional digital delay created by the inductive transmission.

These suggestions are not ideal but, given the current state of affairs, they may improve things somewhat, especially for some types of music.

CONTRIBUTOR
Marshall Chasin

AuD, Director of Audiology and Research, Musicians’ Clinics of Canada; Adjunct Professor, University of Toronto (in Linguistics); Associate Professor, School of Communication Disorders and Sciences, Western University; recipient of the Queen Elizabeth II Silver Diamond Jubilee Award and the Canada 150 Medal; Editor in Chief, Canadian Audiologist (www.CanadianAudiologist.ca).
