新装置或助力大脑信号转化为有声语言
Scientists Create Device to Turn Brain Signals into Speech

2019-05-16

科学家表示,他们发明了一种新装置,可以将大脑信号转换为电子语音。
Scientists say they have created a new device that can turn brain signals into electronic speech.

这项发明有朝一日或能让失去说话能力的人拥有比现有方法更好的交流方式。
The invention could one day give people who have lost the ability to speak a better way of communicating than current methods.

该装置由旧金山加州大学的科学家发明。
The device was developed by researchers from the University of California, San Francisco.

他们的研究结果最近发表在了《自然》期刊的一项研究中。
Their results were recently published in a study in the journal Nature.

科学家发明了一种可植入大脑的"脑机接口"。
Scientists created a "brain machine interface" that is implanted in the brain.

该装置用于读取并记录大脑信号,这些信号帮助控制产生语言的肌肉。
The device was built to read and record brain signals that help control the muscles that produce speech.

这些肌肉包括嘴唇、喉头、舌头和下颌。
These include the lips, larynx, tongue and jaw.

该实验包括两个步骤。
The experiment involved a two-step process.

第一步,研究人员使用"解码器"将大脑电信号转换为人类发声动作的表征。
First, the researchers used a "decoder" to turn electrical brain signals into representations of human vocal movements.

然后,合成器再将这些表征转换为语音句子。
A synthesizer then turns the representations into spoken sentences.
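The two-step pipeline described above can be sketched in code. Everything below is a toy illustration, not the study's actual method: the function names, the random projection standing in for the trained decoder, and the sum-of-sinusoids "synthesizer" are all hypothetical stand-ins chosen only to show the decode-then-synthesize structure.

```python
import math
import random

def decode_to_vocal_features(brain_signals, n_features=8):
    # Stage 1 (hypothetical stand-in for the study's trained decoder):
    # project each time step of recorded brain activity onto a small
    # set of "vocal movement" features via a fixed random matrix.
    rng = random.Random(0)
    weights = [[rng.gauss(0, 1) for _ in range(n_features)]
               for _ in range(len(brain_signals[0]))]
    return [[sum(s * weights[j][k] for j, s in enumerate(step))
             for k in range(n_features)]
            for step in brain_signals]

def synthesize_speech(features, sample_rate=8000, seconds=0.5):
    # Stage 2 (hypothetical stand-in for the synthesizer): render the
    # time-averaged features as a sum of sinusoids, one per feature.
    n = len(features)
    means = [sum(f[k] for f in features) / n for k in range(len(features[0]))]
    samples = []
    for i in range(int(sample_rate * seconds)):
        t = i / sample_rate
        samples.append(sum(m * math.sin(2 * math.pi * (100 + 50 * k) * t)
                           for k, m in enumerate(means)))
    return samples

rng = random.Random(1)
signals = [[rng.gauss(0, 1) for _ in range(16)] for _ in range(50)]  # fake recording
features = decode_to_vocal_features(signals)  # 50 time steps x 8 features
audio = synthesize_speech(features)           # 4000 audio samples
```

The point of the separation is that the intermediate vocal-movement representation, not raw audio, is what the brain signals are mapped to first; the article reports this indirection is central to the approach.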

目前已有其他脑机接口可以帮助无法自主说话的人。
Other brain-computer interfaces already exist to help people who cannot speak on their own.

这些系统通常会追踪使用者的眼球或面部活动,这些使用者已学会逐个字母地拼出自己的想法。
Often these systems are trained to follow eye or facial movements of people who have learned to spell out their thoughts letter-by-letter.

但研究人员表示,这种方法错误率高,而且速度很慢,每分钟最多只能产出约10个口语单词。
But researchers say this method can produce many errors and is very slow, permitting at most about 10 spoken words per minute.

相比之下,自然语音的语速为每分钟100至150个单词。
This compares to between 100 and 150 words per minute used in natural speech.

未来的一些研究将测试这项技术是否适用于无法说话的人。
Future studies will test the technology on people who are unable to speak.

乔希·沙尔捷(Josh Chartier)是旧金山加州大学的一名语音科学家和博士生。
Josh Chartier is a speech scientist and doctoral student at the University of California, San Francisco.

他表示,研究团队在首次听到合成语音的结果时感到"震惊"。
He said the research team was "shocked" when it first heard the synthesized speech results.

该研究报告称,这些语音句子能被数百名听者听懂,这些听者被要求写出所听到的内容。
The study reports the spoken sentences were understandable to hundreds of human listeners asked to write out what they heard.

听者能够完全准确地写出其中43%的句子。
The listeners were able to write out 43 percent of sentences with perfect accuracy.

研究人员指出,与自然语音的情况一样,听者识别较短句子的成功率最高。
The researchers noted that - as is the case with natural speech - listeners had the highest success rate identifying shorter sentences.

研究团队还报告称,合成"sh"等较慢的语音时成功率较高,而合成"b"或"p"等较难的语音时成功率较低。
The team also reported more success synthesizing slower speech sounds like "sh," and less success with harder sounds like "b" or "p."

沙尔捷承认,要实现完美重现口语的目标,还需要对该系统进行更多研究。
Chartier admitted that much more research of the system will be needed to reach the goal of perfectly reproducing spoken language.

但他补充道:"与现有技术相比,我们在此达到的准确度水平将是实时沟通领域的一个惊人进步。"
But he added: "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available."

布莱恩·林恩为您播报。
I’m Bryan Lynn.