AI Is Unlocking the Human Brain’s Secrets

Language models similar to ChatGPT have started to transform neuroscience.


If you are willing to lie very still in a giant metal tube for 16 hours and let magnets blast your brain as you listen, rapt, to hit podcasts, a computer just might be able to read your mind. Or at least its crude contours. Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them—gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind.

The program analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. Then, it used that brain-imaging data to reconstruct the content of those sentences. For example, when one subject heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scans and returned “She has not even started to learn to drive yet”—not a word-for-word re-creation, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data of people watching short films and write approximate summaries of the clips, suggesting the AI was capturing not individual words from the brain scans, but underlying meanings.

The findings, published in Nature Neuroscience earlier this month, add to a new field of research that flips the conventional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and recent voice-cloning programs are built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send outputs to one another to achieve a desired result. Yet even as human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains has remained a mystery. Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. It’s “unquestionably leading to advances that we just couldn’t imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT.
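For a sense of what those layers of synthetic “neurons” amount to, here is a minimal sketch in Python, with made-up weights; real systems such as ChatGPT stack many more layers at vastly larger scale and add machinery, such as attention, that this toy omits.

```python
import numpy as np

# One layer of synthetic "neurons": each neuron computes a weighted sum
# of its inputs and "fires" (outputs a nonzero value) only if that sum
# is positive, loosely echoing how nerve cells pass signals along.
def layer(inputs, weights, biases):
    return np.maximum(0, weights @ inputs + biases)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                               # 4 input signals
h = layer(x, rng.standard_normal((8, 4)), np.zeros(8))   # hidden layer of 8 neurons
y = layer(h, rng.standard_normal((2, 8)), np.zeros(2))   # 2 output neurons
print(y)
```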

The AI program’s apparent proximity to mind reading has caused an uproar on social and traditional media. But that aspect of the work is “more of a parlor trick,” Alexander Huth, a lead author of the Nature Neuroscience study and a neuroscientist at UT Austin, told me. The models were relatively imprecise and had to be fine-tuned for each person who participated in the research, and most brain-scanning techniques provide very low-resolution data; we remain far, far away from a program that can plug into any person’s brain and understand what they’re thinking. The true value of this work lies in its ability to predict which parts of the brain light up as a person listens to or imagines words, which could yield greater insight into the specific ways our neurons work together to create one of humanity’s defining attributes: language.

Successfully building a program that can reconstruct the meaning of sentences, Huth said, primarily serves as “proof-of-principle that these models actually capture a lot about how the brain processes language.” Before this nascent AI revolution, neuroscientists and linguists relied on somewhat generalized verbal descriptions of the brain’s language network that were imprecise and hard to tie directly to observable brain activity. Hypotheses about which aspects of language different brain regions are responsible for (perhaps one region recognizes sounds, another deals with syntax, and so on), or even the fundamental question of how the brain learns a language, were difficult or impossible to test. Now scientists can use AI models to pinpoint more precisely what those processes consist of. The benefits could extend beyond academic concerns, assisting people with certain disabilities, for example, according to Jerry Tang, the study’s other lead author and a computer scientist at UT Austin. “Our ultimate goal is to help restore communication to people who have lost the ability to speak,” he told me.

There has been some resistance to the idea that AI can help study the brain, especially among neuroscientists who study language. That’s because neural networks, which excel at finding statistical patterns, seem to lack basic elements of how humans process language, such as an understanding of what words mean. The difference between machine and human cognition is also intuitive: A program like GPT-4, which can write decent essays and excels at standardized tests, learns by processing terabytes of data from books and webpages, whereas children pick up a language from a fraction of 1 percent of that amount of text. “Teachers told us that artificial neural networks are really not the same as biological neural networks,” the neuroscientist Jean-Rémi King told me of his studies in the late 2000s. “This was just a metaphor.” Now leading research on the brain and AI at Meta, King is among the many scientists rejecting that old dogma. “We don’t think of this as a metaphor,” he told me. “We think of [AI] as a very useful model of how the brain processes information.”

In the past few years, scientists have shown that the inner workings of advanced AI programs offer a promising mathematical model of how our minds process language. When you type a sentence into ChatGPT or a similar program, its internal neural network represents that input as a set of numbers. When a person hears the same sentence, fMRI scans can capture how the neurons in their brain respond, and a computer is able to interpret those scans as basically another set of numbers. These processes repeat on many, many sentences to create two enormous data sets: one of how a machine represents language, and another for a human. Researchers can then map the relationship between these data sets using an algorithm known as an encoding model. Once that’s done, the encoding model can begin to extrapolate: How the AI responds to a sentence becomes the basis for predicting how neurons in the brain will fire in response to it, too.
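In code, that pipeline is compact enough to sketch. The example below is a toy illustration, not the study’s actual method: random numbers stand in for both the language model’s activations and the fMRI recordings, and ridge regression, a common choice for encoding models, plays the role of the learned mapping.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Two parallel data sets: how a machine represents each sentence
# (model_features) and how a brain responds to it (brain_responses).
# Both are simulated here; in a real study they would come from a
# language model's activations and a subject's fMRI scans.
rng = np.random.default_rng(0)
n_sentences, n_model_dims, n_voxels = 1000, 768, 2000
model_features = rng.standard_normal((n_sentences, n_model_dims))
true_map = rng.standard_normal((n_model_dims, n_voxels)) * 0.1
brain_responses = model_features @ true_map + rng.standard_normal((n_sentences, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    model_features, brain_responses, test_size=0.2, random_state=0)

# The encoding model: a regularized linear map from the AI's
# representation of a sentence to the brain's response to it.
encoder = Ridge(alpha=100.0).fit(X_train, y_train)

# Extrapolation: predict brain activity for held-out sentences, then
# score each voxel by correlating predicted with actual responses.
pred = encoder.predict(X_test)
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(scores):.2f}")
```

In a real experiment, those held-out correlations would show which parts of the brain the model’s representations can and cannot explain.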

New research using AI to study the brain’s language network seems to appear every few weeks. Each of these models could represent “a computationally precise hypothesis about what might be going on in the brain,” Nancy Kanwisher, a neuroscientist at MIT, told me. For instance, AI could help answer the open question of what exactly the human brain is aiming to do when it acquires a language—not just that a person is learning to communicate, but the specific neural mechanisms through which communication comes about. The idea is that if a computer model trained with a specific objective—such as learning to predict the next word in a sequence or judge a sentence’s grammatical coherence—proves best at predicting brain responses, then it’s possible the human mind shares that goal; maybe our minds, like GPT-4, work by determining what words are most likely to follow one another. The inner workings of a language model, then, become a computational theory of the brain.
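To make that logic concrete, here is a hedged sketch of how such a comparison might run. The two feature sets are hypothetical stand-ins for models trained on different objectives (next-word prediction versus grammaticality judgments), and the simulated voxel is deliberately built to favor the first, so the example illustrates the method rather than supplying evidence for either theory.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Simulated features from two hypothetical models trained on different
# objectives, plus one voxel's (simulated) responses to the sentences.
rng = np.random.default_rng(1)
n_sentences = 500
features_next_word = rng.standard_normal((n_sentences, 256))
features_grammar = rng.standard_normal((n_sentences, 256))
voxel_response = features_next_word[:, :5].sum(axis=1) + rng.standard_normal(n_sentences)

def brain_score(features, response):
    """Cross-validated R^2 of an encoding model built on these features."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    return cross_val_score(model, features, response, cv=5, scoring="r2").mean()

# Whichever feature set better predicts brain activity is, by this
# logic, the stronger hypothesis about what the brain is computing.
for name, feats in [("next-word prediction", features_next_word),
                    ("grammaticality", features_grammar)]:
    print(f"{name}: brain score = {brain_score(feats, voxel_response):.3f}")
```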

These computational approaches are only a few years old, so there are many disagreements and competing theories. “There is no reason why the representation you learn from language models has to have anything to do with how the brain represents a sentence,” Francisco Pereira, the director of machine learning for the National Institute of Mental Health, told me. But that doesn’t mean a relationship cannot exist, and there are various ways to test whether it does. Unlike the brain, language models can be taken apart, examined, and manipulated almost without limit; even if AI programs aren’t complete theories of the brain, they are powerful tools for studying it. For instance, cognitive scientists can try to predict the responses of targeted brain regions, and test how different types of sentences elicit different types of brain responses, to figure out what those specific clusters of neurons do “and then step into territory that is unknown,” Greta Tuckute, who studies the brain and language at MIT, told me.

For now, the utility of AI may not be to precisely replicate that unknown neurological territory, but to devise heuristics for exploring it. “If you have a map that reproduces every little detail of the world, the map is useless because it’s the same size as the world,” Anna Ivanova, a cognitive scientist at MIT, told me, invoking a famous Borges parable. “And so you need abstraction.” It is by specifying and testing what to keep and jettison—choosing among streets and landmarks and buildings, then seeing how useful the resulting map is—that scientists are beginning to navigate the brain’s linguistic terrain.

Matteo Wong is an associate editor at The Atlantic.