Better hearing aids could spring from computer modelling of impaired ears at the University of Essex, which has revealed surprising behaviour inside damaged ears.
Led by Professor Ray Meddis, the researchers started with an algorithmic model of a healthy ear.
“We are experts in modelling the hearing system by reverse-engineering the human ear,” said Meddis. As the healthy ear model improved, “it became clear that we could simulate what goes wrong with hearing.”
With funding from the Engineering and Physical Sciences Research Council (EPSRC), the Essex team began detailed sound tests on hearing-impaired people.
As findings were fed into the model, said Meddis, it became good enough in many cases to predict the full span of someone’s hearing response once tests had identified which part of their ear was faulty.
It revealed a flaw in traditional hearing analysis, and a surprise that explained why so many expensive hearing aids are worn only a few times.
“Some 25% of people pay £1,000-5,000 for their hearing aid and don’t wear it,” he said. “Once we could simulate poor hearing, we could see what needed to be done with hearing aids.”
It transpired that hearing impairment does not just produce a frequency-dependent loss of sensitivity; it can also change the transfer function of the hearing system.
Healthy ears compress the dynamic range of incoming sound with a roughly logarithmic response.
“Normal hearing is compressive. The bad thing about some people’s hearing loss is that they go linear,” said Meddis. “These are the people who say ‘Don’t shout, I’m not deaf’.”
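The difference can be sketched numerically. This is an illustrative Python snippet, not the Essex model: the threshold and compression ratio are assumed values, chosen only to show the shape of the two responses.

```python
# Illustrative sketch (assumed numbers): how perceived level grows with
# input level for a compressive (healthy) ear versus a linear (impaired) ear.

def healthy_output_db(input_db, threshold_db=0.0, ratio=3.0):
    """Compressive response: above threshold, output grows at 1/ratio dB
    per input dB, squeezing a wide dynamic range into a narrow one."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

def linear_output_db(input_db):
    """'Gone linear' hearing: loudness grows dB for dB with input."""
    return input_db

# Quiet speech, loud speech, shouting (rough dB SPL figures, assumed)
for level in (40, 70, 100):
    print(level, round(healthy_output_db(level), 1), linear_output_db(level))
```

With these assumed numbers, a 60 dB span of input becomes only 20 dB of growth for the compressive ear, while the linear ear passes the full 60 dB through.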
Traditional hearing tests, according to Meddis, only measure the threshold of perception at different frequencies.
A hearing aid is then programmed to selectively amplify those frequencies.
“If you have two people whose hearing is weak at the same frequency, they will be given the same hearing aid,” said Meddis.
This might be fine for one of them, but the other might have linear hearing.
“The hearing aid makes their threshold normal, but with linear hearing perceived loudness grows rapidly,” he explained. “They go to a cinema or restaurant and the level they hear is now above the threshold that many people can tolerate.”
This person sometimes needs less sound, not more.
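A hypothetical fitting sketched in Python shows why. The gain, input levels and discomfort threshold below are assumed numbers chosen only to illustrate the mechanism, not clinical values.

```python
# Assumed scenario: both listeners show a 40 dB threshold shift at the same
# frequency, so a traditional fit gives both +40 dB of gain at that frequency.
GAIN_DB = 40        # gain chosen to restore the 40 dB threshold shift
TOLERANCE_DB = 100  # assumed loudness-discomfort level for the listener

def aided_level(input_db, gain_db=GAIN_DB):
    """Simple linear amplification: the aid adds a fixed gain in dB."""
    return input_db + gain_db

quiet_speech, restaurant = 45, 75
print(aided_level(quiet_speech))  # 85: quiet speech is now audible
print(aided_level(restaurant))    # 115: above the assumed tolerance level
```

For the compressive listener the ear itself tames the amplified restaurant noise; for the linear listener it arrives at full amplified level, which is why less sound, not more, is sometimes needed.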
To allow for transfer function variation on top of frequency dependence, the Essex team is chopping sound into bands and delivering it at a constant amplitude suited to the particular person – something it has dubbed ‘instantaneous compression’.
“Many hearing aids have a compressor, but it takes 20-100ms to react,” said Meddis. “This is too slow for many types of sound, but if you make the compression time shorter, you get a lot of distortion, and if you make it very short, you get clipping.”
Instantaneous compression splits the frequency spectrum into a number of narrow bands, and applies constant power compression to each band separately.
This adds a lot of distortion, which is reduced by a second bank of band filters.
“Re-filtering the band gets rid of nearly all the distortion,” said Meddis. “Then you add all the filters and that is what the patient gets.”
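The chain described above — split into bands, force each band to constant power, re-filter each band, then sum — can be sketched as follows. This is an illustrative Python/NumPy sketch, not the Essex implementation: the brick-wall FFT filter, band edges, frame length and target level are all assumptions.

```python
# Sketch of the 'instantaneous compression' pipeline as described in the
# article. All parameters here are assumed for illustration.
import numpy as np

def band_filter(x, lo, hi, fs):
    """Crude brick-wall band-pass via FFT (stands in for a real filter bank)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(X, len(x))

def instantaneous_compression(x, fs, edges, target_rms=0.1, frame=64):
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = band_filter(x, lo, hi, fs)
        # Constant-power normalisation, frame by frame, within the band
        for i in range(0, len(band) - frame + 1, frame):
            seg = band[i:i + frame]
            rms = np.sqrt(np.mean(seg ** 2))
            if rms > 1e-12:
                band[i:i + frame] = seg * (target_rms / rms)
        # Second filtering pass strips distortion the gain changes created
        out += band_filter(band, lo, hi, fs)
    return out

fs = 16000
t = np.arange(fs) / fs
# Test signal: a strong 300 Hz tone plus a weak 2 kHz tone
sig = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
edges = [100, 500, 1000, 2500, 4000]  # 4 assumed bands; the team expects 5-10
y = instantaneous_compression(sig, fs, edges)
```

After processing, the weak 2 kHz component has been lifted to the same in-band power as the strong 300 Hz component, which is the effect the per-band constant-power stage is meant to deliver.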
It looks like 5-10 bands will be enough. “We need to find the fewest number of bands that will give good results. We know it is less than 20,” he said.
A side effect of all this modelling is that the researchers can predict how much useful hearing a particular person could recover, by combining that person’s algorithmic ear model with a model of the instantaneous compression hearing aid algorithms.
Talking to hearing aid manufacturers has revealed a potential problem for the system, based on personal appearance.
It requires attenuation when ambient noise is high, so the ear canal has to be blocked – easy for older in-ear aids, but not for modern hearing aids that fit behind the user’s ear and feed sound in through an almost-invisible slim transparent tube.
The tube rests in the canal, but does not seal it.
“You can’t give our algorithms to clinicians because hearing aids today can’t make sounds quieter,” said Meddis. “What they don’t want is a plug in their ear – which is exactly what is required if sound levels have to be reduced.”
The answer may be, he added, a plug that can be slipped over the end of the tube when the wearer enters a noisy environment.
Meddis has decided not to patent the algorithms. “We don’t mind who has this information, we just want to get it out there,” he said.
The team has tied up with Swiss firm Phonak to develop a practical hearing aid.
“They have assured us that they already have hearing aids with all of the components and enough processing power to implement our algorithms,” said Meddis.
The researchers would like to make it clear that this is research and that, even if the procedures gain clinical acceptance, no device will reach the market for at least two years.
PC interface lets researcher tune hearing aid transfer function in real time
As an alternative to traditional hearing threshold tests, a live graphical user interface allows a PC-based hearing aid simulator to be adjusted for a particular person at normal hearing levels.
The algorithms within have been developed at the University of Essex to compensate for recently-discovered aural transfer function errors.
For example: parameters TA and TD are traditional – actual hearing threshold and desired threshold respectively, while TM and TC relate to a feedback loop through which the brain controls gain in the ear.
“This is a sexy topic for hearing researchers. TM and TC control our attempt to model this,” Professor Ray Meddis told Electronics Weekly. “They might not be the right controls yet.”