
Doctors Create a Wireless Brain-Machine Interface for Real-Time Speech Synthesis That Turns Thoughts into Spoken Words for Paralyzed People




Citation and image credit: Guenther FH, Brumberg JS, Wright EJ, Nieto-Castanon A, Tourville JA, et al. (2009) A Wireless Brain-Machine Interface for Real-Time Speech Synthesis. PLoS ONE 4(12): e8218. doi:10.1371/journal.pone.0008218


A team of scientists and doctors has created a brain-machine interface (BMI) to restore speech to paralyzed people. BMIs involve electrodes implanted into the human cerebral cortex in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but they are only capable of slow communication rates. In the current study, the researchers take a novel approach to speech restoration: they decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech.
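To make "continuous auditory parameters" concrete: in formant-based synthesis, a vowel-like sound can be rendered from just two slowly varying numbers, the first and second formant frequencies (F1 and F2). The Python sketch below illustrates that idea; it is not the authors' synthesizer, and the sample rate, bandwidths, pitch, and formant values are illustrative assumptions.

```python
import numpy as np

FS = 8000  # audio sample rate in Hz (an assumption for this sketch)

def resonator(x, freq, fs=FS, bw=80.0):
    """Second-order IIR resonator centered at `freq` Hz with bandwidth `bw` Hz."""
    r = np.exp(-np.pi * bw / fs)
    a1 = 2.0 * r * np.cos(2.0 * np.pi * freq / fs)
    a2 = -r * r
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = xn + a1 * y1 + a2 * y2
        y[n] = yn
        y2, y1 = y1, yn
    return y

def synthesize_frame(f1, f2, f0=120.0, dur=0.02, fs=FS):
    """Render one ~20 ms frame of a vowel-like sound from formants F1 and F2."""
    t = np.arange(int(dur * fs))
    source = (t % int(fs / f0) == 0).astype(float)  # impulse-train glottal source
    return resonator(resonator(source, f1), f2)     # cascade the two resonators

# A stream of (F1, F2) pairs gliding from /u/-like to /a/-like values,
# standing in for parameters decoded from neural activity frame by frame.
audio = np.concatenate([synthesize_frame(f1, f2)
                        for f1, f2 in zip(np.linspace(300, 700, 25),
                                          np.linspace(900, 1200, 25))])
```

The appeal of such a parameterization is that only a few numbers per frame need to be decoded from neural activity, rather than whole words or phonemes.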

The device could overcome one of the most debilitating aspects of profound paralysis due to accident, stroke, or disease: the loss of the ability to speak. The loss of speech not only makes communicating needs to caregivers very difficult, but it also leads to profound social isolation of the affected individuals. The research supports the feasibility of neural prostheses that could decode and synthesize intended words in real time, i.e., as the (mute) speaker attempts to speak them.

Neural signals recorded by a Neurotrophic Electrode, implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome (near-total paralysis with spared cognition), were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. The accuracy of the volunteer's vowel productions improved quickly with practice: from the first to the last block of a three-vowel task, the average hit rate rose by 25 percentage points (from 45% to 70%) and the average endpoint error fell by 46%.
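The Kalman filter mentioned above is a standard linear-Gaussian estimator: at each decode cycle it predicts the next synthesizer parameters from the previous ones, then corrects that prediction with the latest neural observations. Below is a minimal Python sketch of one such cycle; the `KalmanDecoder` name, matrix shapes, and values are hypothetical placeholders, whereas in the study the corresponding models were fit to the volunteer's own recordings.

```python
import numpy as np

class KalmanDecoder:
    """Minimal linear Kalman filter mapping neural observations to continuous
    synthesizer parameters. All matrices here are illustrative placeholders."""

    def __init__(self, A, W, H, Q, x0, P0):
        self.A, self.W = A, W    # state transition model and its noise covariance
        self.H, self.Q = H, Q    # observation (neural tuning) model and its noise covariance
        self.x, self.P = x0, P0  # current state estimate and its covariance

    def step(self, z):
        """One decode cycle given a neural observation vector `z`."""
        # Predict the next state from the dynamics model.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Correct the prediction with the new observation.
        S = self.H @ P_pred @ self.H.T + self.Q
        K = P_pred @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = x_pred + K @ (z - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x  # decoded parameters for this frame

# Hypothetical sizes: 2 decoded parameters, 10 neural channels.
rng = np.random.default_rng(0)
decoder = KalmanDecoder(A=np.eye(2), W=0.01 * np.eye(2),
                        H=rng.normal(size=(10, 2)), Q=np.eye(10),
                        x0=np.zeros(2), P0=np.eye(2))
params = decoder.step(rng.normal(size=10))  # one frame of decoded output
```

Because each cycle amounts to a handful of small matrix operations, it can comfortably run inside the 50 ms feedback window reported above.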

These results support the feasibility of neural prostheses that could provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also offer an initial glimpse into the functional properties of neurons in speech motor cortical areas.

The research team consisted of Frank H. Guenther [1,2], Jonathan S. Brumberg [1,3], E. Joseph Wright [3], Alfonso Nieto-Castanon [4], Jason A. Tourville [1], Mikhail Panko [1], Robert Law [1], Steven A. Siebert [3], Jess L. Bartels [3], Dinal S. Andreasen [3,5], Princewill Ehirim [6], Hui Mao [7], and Philip R. Kennedy [3].

[1] Department of Cognitive and Neural Systems and Sargent College of Health and Rehabilitation Sciences, Boston University, Boston, Massachusetts, United States of America; [2] Division of Health Sciences and Technology, Harvard University-Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; [3] Neural Signals Inc., Duluth, Georgia, United States of America; [4] StatsANC LLC, Buenos Aires, Argentina; [5] Georgia Tech Research Institute, Marietta, Georgia, United States of America; [6] Gwinnett Medical Center, Lawrenceville, Georgia, United States of America; [7] Emory Center for Systems Imaging, Emory University Hospital, Atlanta, Georgia, United States of America.



The complete article may be found here: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0008218
