Medical
Date: 12-12-13

Hearing aid tech suppresses background noise using a neural network algorithm

The major problem with present-day hearing aids is that users are frustrated by background noise; in particular, when two or more people are talking at the same time, hearing aid users have difficulty listening. Engineers and scientists at The Ohio State University have developed a neural network-based algorithm that improves word recognition by 90% compared to present hearing aids.

This technology can also be embedded inside a smartphone, so hearing aid users would not need a separate device when they already have a smartphone equipped with it. Present-day smartphone processors can perform the complex signal processing and broadcast the enhanced signal wirelessly to ultra-small earpieces.

This patent-pending technology is supported by hearing aid manufacturer Starkey. The desire to understand one voice in a roomful of chatter has been dubbed the “cocktail party problem,” and this technology is designed to address it.

“Focusing on what one person is saying and ignoring the rest is something that normal-hearing listeners are very good at, and hearing-impaired listeners are very bad at,” said Eric Healy, professor of speech and hearing science and director of Ohio State’s Speech Psychoacoustics Laboratory. “We’ve come up with a way to do the job for them, and make their limitations moot.”

“For 50 years, researchers have tried to pull out the speech from the background noise. That hasn’t worked, so we decided to try a very different approach: classify the noisy speech and retain only the parts where speech dominates the noise,” said DeLiang “Leon” Wang, professor of computer science and engineering, Ohio State University.
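The "retain only the parts where speech dominates" idea corresponds to what the speech-separation literature calls a time-frequency binary mask: each cell of a spectrogram is kept if its local signal-to-noise ratio exceeds a threshold, and zeroed otherwise. A minimal sketch (the separate clean-speech and noise spectrograms assumed here are only available during training, which is exactly why a classifier must learn to estimate the mask from the noisy mixture alone):

```python
import numpy as np

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """Keep time-frequency cells where the local SNR exceeds lc_db."""
    eps = 1e-12  # avoid log(0) / division by zero
    snr_db = 20.0 * np.log10((speech_mag + eps) / (noise_mag + eps))
    return (snr_db > lc_db).astype(float)

# Toy 2x3 magnitude spectrograms (frequency x time): speech dominates
# some cells, noise dominates others.
speech = np.array([[4.0, 0.5, 3.0],
                   [0.2, 2.0, 0.1]])
noise = np.array([[1.0, 2.0, 1.0],
                  [1.0, 0.5, 1.0]])

mask = ideal_binary_mask(speech, noise)
enhanced = mask * (speech + noise)  # zero out noise-dominated cells
```

Applying the mask to the noisy mixture discards the noise-dominated regions while leaving the speech-dominated regions intact, which is the classification-and-retention strategy Wang describes.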

The algorithm is unique, Wang said, because it utilizes a technique called machine learning. He and doctoral student Yuxuan Wang are training the algorithm to separate speech by exposing it to different words in the midst of background noise. They use a special type of neural network called a “deep neural network” to do the processing—so named because its learning is performed through a deep layered structure inspired by the human brain, according to the release.
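The training setup described above can be illustrated with a heavily simplified sketch: a tiny two-layer network learns to classify each time-frequency unit as speech-dominant or noise-dominant. The feature vectors and labels below are synthetic stand-ins (the actual features and network depth used by the Ohio State group are not specified here); this shows only the general mechanics of learning a mask classifier by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: one hypothetical feature vector per
# time-frequency unit, labeled 1 if speech dominates, 0 otherwise.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

# Tiny two-layer network (a stand-in for a deeper layered structure).
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def forward(X, W1, W2):
    h = np.maximum(0.0, X @ W1)          # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid: P(speech-dominant)
    return h, p

lr = 0.5
for _ in range(300):
    h, p = forward(X, W1, W2)
    err = p - y[:, None]                 # cross-entropy gradient at logits
    dW2 = h.T @ err / len(X)
    dh = (err @ W2.T) * (h > 0)          # backprop through ReLU
    dW1 = X.T @ dh / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

_, p = forward(X, W1, W2)
accuracy = float(((p[:, 0] > 0.5) == (y > 0.5)).mean())
```

At inference time such a classifier would see only features of the noisy mixture, predict the mask, and the masked spectrogram would then be resynthesized into enhanced audio.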

The technology is currently being commercialized and is available for license from Ohio State’s Technology Commercialization and Knowledge Transfer Office.
