
Researchers at The University of Texas at Austin have developed an artificial intelligence-based system called a semantic decoder that can translate brain activity into a continuous stream of text. Unlike previous attempts in this field, the approach is noninvasive and does not rely on surgical implants or predefined word lists.

Understanding the Semantic Decoder

The semantic decoder translates the brain activity generated when a person listens to a story or imagines narrating one. This breakthrough could help people who have lost the ability to speak, for example after a stroke, to communicate again. The technology does not produce a verbatim transcript; instead, it aims to capture the gist of the words or thoughts expressed.

Published in Nature Neuroscience, the study was led by doctoral student Jerry Tang and assistant professor Alex Huth. The decoder uses a transformer model akin to those behind OpenAI’s ChatGPT and Google’s Bard, and it must first undergo extensive training: brain activity is measured with an fMRI scanner while the individual listens to hours of podcasts.
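As a rough illustration of how such training could work (this is a hedged sketch, not the authors’ code; the toy data, the linear encoding model, and all variable names are simplifying assumptions), one can fit a model that predicts fMRI responses from language-model features of the heard words, then score candidate word sequences by how well their predicted brain response matches the observed one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: feature vectors for heard words (e.g., from a transformer
# language model) and fMRI responses recorded while listening to them.
n_timepoints, n_features, n_voxels = 200, 16, 32
word_features = rng.normal(size=(n_timepoints, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
fmri_responses = word_features @ true_weights + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# "Training": fit a ridge-regularized linear encoding model mapping
# language features to each voxel's response.
lam = 1.0
W = np.linalg.solve(
    word_features.T @ word_features + lam * np.eye(n_features),
    word_features.T @ fmri_responses,
)

def score(candidate_features, observed_response):
    """Higher is better: how well a candidate word sequence's predicted
    brain response matches what was actually recorded."""
    predicted = candidate_features @ W
    return -np.sum((predicted - observed_response) ** 2)
```

In this sketch, decoding amounts to searching over candidate word sequences and keeping the ones whose predicted responses best explain the recorded activity, which is why meaning can be recovered even when exact wording is not.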

Following the training, the participant’s brain activity while listening to a new story, or imagining telling one, can be decoded into text. The output, though not flawless, captures the intended meaning of the original words about half of the time. In one example, a listener’s thought, “I don’t have my driver’s license yet,” was decoded as “She has not even started to learn to drive yet.”
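That example shows why the decoder is judged on meaning rather than exact wording: the two sentences share almost no surface vocabulary. A minimal illustration (a hypothetical check, not the study’s evaluation metric) using bag-of-words cosine similarity makes the point:

```python
import re
from math import sqrt

def bag_of_words(text):
    """Count lowercase word occurrences in a sentence."""
    counts = {}
    for w in re.findall(r"[a-z']+", text.lower()):
        counts[w] = counts.get(w, 0) + 1
    return counts

def cosine(a, b):
    """Cosine similarity between two word-count dictionaries."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm

heard = "I don't have my driver's license yet"
decoded = "She has not even started to learn to drive yet"

# Surface word overlap is low even though the meaning matches,
# which is why a semantic (not literal) comparison is needed.
overlap = cosine(bag_of_words(heard), bag_of_words(decoded))
```

Here `overlap` comes out well under 0.5, despite the two sentences meaning nearly the same thing.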

Researchers are aware of the potential for misuse and have explicitly stated that decoding works only with cooperative participants; uncooperative or untrained individuals yield unintelligible results, which helps ensure the technology is used responsibly.

The paper also discusses its potential application in describing visual stimuli, as demonstrated by experiments where subjects watched silent videos.

From Lab to Life

Though currently limited to the laboratory due to the need for an fMRI machine, the team believes the decoder could be compatible with more portable brain-imaging systems like functional near-infrared spectroscopy (fNIRS). This advancement could facilitate practical, out-of-lab usage, enabling wider accessibility and application of the technology.

The research was funded by the Whitehall Foundation, the Alfred P. Sloan Foundation, and the Burroughs Wellcome Fund. A PCT patent application has been filed by Tang and Huth relating to this innovative work.