Scientists decode brainwaves

Neuroscientists may one day be able to eavesdrop on the constant internal monologues that run through our minds, or hear the imagined speech of a stroke patient or a locked-in patient who is unable to speak, according to researchers at the University of California, Berkeley.

The work, conducted in the labs of Robert Knight at Berkeley and Edward Chang at UCSF, is reported in the open-access journal PLoS Biology. The report will be accompanied by an interview with the authors for the PLoS Biology Podcast.

The scientists have succeeded in decoding electrical activity in a region of the human auditory system called the superior temporal gyrus (STG). By analyzing the pattern of STG activity, they were able to reconstruct words that subjects listened to in normal conversation.

"This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak," said Knight, Professor of Psychology and Neuroscience at UC Berkeley. "If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."

A prosthetic device

"This research is based on sounds a person actually hears, but to use this for a prosthetic device, these principles would have to apply to someone who is imagining speech," cautioned first author Brian N. Pasley, a post-doctoral researcher at UC Berkeley. "There is some evidence that perception and imagery may be pretty similar in the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device."

Pasley tested two different methods of matching spoken sounds to the pattern of activity recorded by the electrodes. Patients heard a single word, and Pasley used two different computational models to predict the word from the electrode recordings. The better of the two methods reproduced a sound close enough to the original for him and his fellow researchers to guess the word at better-than-chance rates.
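The article does not give the models' details, but the general approach it describes, reconstructing the sound from neural recordings and then identifying the word, can be illustrated with a rough sketch. The following is not the authors' actual method; it is a minimal "stimulus reconstruction" decoder with synthetic data, where a linear map from electrode features to a sound spectrogram is fit by ridge regression, and a single test trial is classified by correlating the reconstructed spectrogram against candidate word templates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: word presentations, electrode features per
# presentation, and frequency bins in each word's spectrogram.
n_trials, n_electrodes, n_freq = 200, 16, 32

true_W = rng.normal(size=(n_electrodes, n_freq))   # assumed linear mapping
X = rng.normal(size=(n_trials, n_electrodes))      # synthetic neural features
S = X @ true_W + 0.1 * rng.normal(size=(n_trials, n_freq))  # spectrograms

# Fit a ridge-regularized linear decoder: neural activity -> spectrogram.
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ S)

# Single-trial test: reconstruct the spectrogram for one new presentation,
# then pick the candidate word whose template correlates best with it.
x_new = rng.normal(size=n_electrodes)
s_true = x_new @ true_W                            # candidate 0 is correct
candidates = np.vstack([s_true, rng.normal(size=(9, n_freq))])

s_rec = x_new @ W_hat
corrs = [np.corrcoef(s_rec, c)[0, 1] for c in candidates]
best = int(np.argmax(corrs))
print(best)
```

On this synthetic data the decoder recovers the mapping well enough to rank the correct template first; real neural data is far noisier, which is why single-trial identification, as in the study, is the harder test.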

"We think we would be more accurate with an hour of listening and recording and then repeating the word many times," Pasley said. But because any realistic device would need to identify words accurately the first time they are heard, he decided to test the models using only a single trial.

"I didn't think it could possibly work, but Brian did it," Knight said. "His computational model can reproduce the sound the patient heard and you can actually recognize the word, although not at a perfect level."

Brain exploration

The ultimate goal of the study was to explore how the human brain encodes speech and determine which aspects of speech are most important for understanding.

"At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound," Pasley said. "The big question is, what is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings."

In the accompanying Podcast, PLoS Biology Editor Ruchir Shah sits down with Brian Pasley and Robert Knight to discuss their main findings, the applications for neural prosthetics, and the potential ethical implications of "mind-reading".

(EurekAlert, January 2012)

