Friday, June 24, 2011

A computer program that can read silently spoken words by analysing nerve signals in our mouths and throats has been developed by NASA.

Preliminary results show that using button-sized sensors, which attach under the chin and on the side of the Adam's apple, it is possible to pick up and recognise nerve signals and patterns from the tongue and vocal cords that correspond to specific words.

"Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement," says Chuck Jorgensen, a neuroengineer at NASA's Ames Research Center in Moffett Field, California, who is in charge of the research. Just the slightest movements of the voice box and tongue are all it needs to work, he says.

The sensors have already been used to do simple web searches and may one day help space-walking astronauts and people who cannot talk. The system could send commands to rovers on other planets, help injured astronauts control machines, or aid disabled people.

In everyday life, they could even be used to communicate on the sly - people could use them on crowded buses without being overheard, say the NASA scientists.

Web search
For the first test of the sensors, scientists trained the software program to recognise six words - including "go", "left" and "right" - and 10 numbers. Participants hooked up to the sensors silently said the words to themselves and the software correctly picked up the signals 92 per cent of the time.

Then researchers put the letters of the alphabet into a matrix with each column and row labelled with a single-digit number. In that way, each letter was represented by a unique pair of number co-ordinates. These were used to silently spell "NASA" into a web search engine using the program.
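The grid scheme above can be sketched in a few lines: lay the alphabet out row by row, label rows and columns with single digits, and each letter becomes a unique digit pair. The 5-by-6 layout below is an assumption, since the article does not give the exact grid NASA used:

```python
# Sketch of the letter grid: 26 letters laid out row by row in a 5x6
# matrix (an assumed layout), with rows and columns numbered from 1,
# so each letter maps to a unique (row, column) pair of digits.
import string

COLS = 6
coords = {
    letter: (i // COLS + 1, i % COLS + 1)
    for i, letter in enumerate(string.ascii_uppercase)
}

def spell(word):
    """Return the sequence of digit pairs that silently spells `word`."""
    return [coords[ch] for ch in word.upper()]

print(spell("NASA"))  # [(3, 2), (1, 1), (4, 1), (1, 1)]
```

Silently "saying" each digit of a pair in turn would then let a user enter any letter using only the ten trained number signals.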

"This proved we could browse the web without touching a keyboard," says Jorgensen.

Noisy settings
Phil Green, a computer scientist focusing on speech and hearing at the University of Sheffield, UK, called the research "interesting and novel" on hearing the news. "If you're not actually speaking but just thinking about speaking then at least some of the messages still get sent from the brain to the vocal tract," he says.

But he cautions that the preliminary tests may have been successful because of the short lengths of the words, and suggests the test be repeated on many different people to verify that the sensors work for everyone.

The initial success "doesn't mean it will scale up", he told New Scientist. "Small-vocabulary, isolated word recognition is a quite different problem than conversational speech, not just in scale but in kind."

He says conventional voice-recognition technology is more powerful than the apparent results of these sensors, and that "the obvious thing is to couple this with acoustics" to enhance communication in noisy settings.

The NASA team is now working on sensors that will detect signals through clothing.