
A neuroscientist explains the limits and possibilities of using technology to read our thoughts



Brain activity doesn’t tell us what someone is experiencing


In 2007, The New York Times published an op-ed titled “This is Your Brain on Politics.” The authors imaged the brains of swing voters and, using that information, interpreted what the voters were feeling about presidential candidates Hillary Clinton and Barack Obama.

“As I read this piece,” writes Russell Poldrack, “my blood began to boil.” Poldrack is a neuroscientist at Stanford University and the author of The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts (out now from Princeton University Press). His research focuses on what we can learn from brain imaging techniques such as fMRI, which measures blood flow in the brain as a proxy for neural activity. And one of the clearest conclusions, he writes, is that activity in a particular brain region doesn’t actually tell us what the person is experiencing.

The Verge spoke to Poldrack about the limits and possibilities of fMRI, the fallacies people commit in interpreting its results, and the pitfalls of its widespread use. This interview has been lightly edited for clarity.

When did “neuroimaging” start to be everywhere?

Russell Poldrack
Photo: Lisa DeNeffe Photography

My guess is around 2007. There were results coming out around 2000 and 2001 that started to show that we can probably start to decode the contents of somebody’s mind from imaging. These were mostly focused on what the person was seeing, and that doesn’t seem shocking, I think. We know a lot about the visual system but it doesn’t seem uniquely human or conscious.

In 2007, there were a number of papers that showed that you can decode people’s intentions, like whether they were going to add or subtract numbers in the next few seconds, and that seemed like really conscious cognitive stuff. Maybe that was when brain reading really broke into awareness.

A lot of your book is about the limits of fMRI and neuroimaging, but what can it tell us?

It’s the best way we have of looking at human brains in action. It’s limited, and it’s an indirect measure of neurons because you’re measuring blood flow instead of the neurons themselves. But if you want to study human brains, it works better than anything else we have at pinpointing activity.

What are some of the technical challenges around fMRI?

The data are very complex and require a lot of processing to go from an MRI scanner to the things you see published in a scientific paper. And there are things like the fact that every human brain is slightly different, so we have to warp them all together to get them to match. The statistical analysis is very complex, and there have been a set of controversies in the fMRI world about how statistics are being used, interpreted, and misinterpreted. We’re doing so many tests, we have to make sure we’re not fooling ourselves with statistical flukes. The false positive rate we try to enforce is 5 percent.
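The multiple-testing problem Poldrack describes can be sketched in a few lines: an fMRI analysis runs one statistical test per voxel, so at a 5 percent false-positive rate per test, flukes pile up fast across a whole brain. A minimal illustration in Python, using the simple Bonferroni correction as an example (real fMRI corrections are typically more sophisticated, e.g. cluster-based or false-discovery-rate methods):

```python
def family_wise_error(alpha: float, n_tests: int) -> float:
    """Chance of at least one false positive across n independent tests."""
    return 1 - (1 - alpha) ** n_tests

def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Stricter per-test threshold that keeps the family-wise rate near alpha."""
    return alpha / n_tests

# One test: a 5 percent chance of a statistical fluke.
print(round(family_wise_error(0.05, 1), 3))    # 0.05
# 100 independent tests: a fluke is almost guaranteed somewhere.
print(round(family_wise_error(0.05, 100), 3))  # 0.994
# Bonferroni correction: each test must clear a much stricter bar.
print(bonferroni_alpha(0.05, 100))             # 0.0005
```

With the hundreds of thousands of voxels in a real scan, the uncorrected chance of at least one fluke is effectively 100 percent, which is why the field’s statistical controversies center on how this correction is done.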

“I can decode if you’re seeing a cat or a house with pretty much perfect accuracy, but anything interestingly cognitive, we can’t decode.”

What about generalizability? How well can you generalize from one person’s results to say, “this happens in all humans”?

It depends on the nature of what you’re trying to generalize. There are large-scale things that we can make generalizations about. Pretty much every healthy adult human has visual processing going on in the back of the brain, stuff like that. But there’s a lot of fine-grained detail about each brain that gets lost. You can generalize coarse-grained things, but the minute you want to dig into finer-grained, you have to look at each individual more closely.

In the book, you talk a lot about the fallacy of “reverse inference.” What is that?

Reverse inference is the idea that presence of activity in some brain area tells you what the person is experiencing psychologically. For example, there’s a brain region called the ventral striatum. If you receive any kind of reward, like money or food or drugs, there will be greater activity in that part of the brain.

The question is, if we take somebody and we don’t know what they’re doing, but we see activity in that part of the brain, how strongly should we decide that the person must be experiencing reward? If reward was the only thing that caused that sort of activity, we could be pretty sure. But there’s not really any part of the brain that has that kind of one-to-one relationship with a particular psychological state. So you can’t infer from activity in a particular area what someone is actually experiencing.

You can’t say “we saw a blob of activity in the insula, so the person must be experiencing love.”
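The logic of why reverse inference fails can be made concrete with Bayes’ rule. In this sketch every number is invented for illustration: even if a psychological state activates a region very reliably, the reverse probability collapses once other states also activate it.

```python
def p_state_given_activity(p_act_given_state: float,
                           p_state: float,
                           p_act_given_other: float) -> float:
    """Bayes' rule: P(state | activity), given forward activation rates."""
    p_other = 1 - p_state
    p_activity = p_act_given_state * p_state + p_act_given_other * p_other
    return p_act_given_state * p_state / p_activity

# Made-up numbers: reward activates the region 90% of the time, but so do
# many other states (30% of the time), and reward is only 10% of moments.
print(round(p_state_given_activity(0.9, 0.1, 0.3), 2))  # 0.25
```

A 90 percent forward relationship turns into only a 25 percent reverse inference, which is the gap between “reward activates this region” and “activity here means reward.”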

What would be the correct interpretation then?

The correct interpretation would be something like, “we did X and it’s one of the things that causes activity in the insula.”

But we also know that there are tools from statistics and machine learning that let us quantify how well you can predict one thing from another. Using statistical analysis, you can say, “we can infer with 64 percent accuracy whether this person is experiencing X based on activity across the brain.”
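A toy version of that kind of quantified decoding is sketched below; everything here (the synthetic “voxel” patterns, the noise level, the nearest-centroid classifier) is a made-up illustration of the approach, not a real fMRI pipeline.

```python
import random

random.seed(0)
N_VOXELS = 50

# Hypothetical "true" multi-voxel activity patterns for two conditions.
pattern_a = [random.gauss(0, 1) for _ in range(N_VOXELS)]
pattern_b = [random.gauss(0, 1) for _ in range(N_VOXELS)]

def noisy_trial(pattern, noise=2.0):
    """One simulated measurement: the true pattern plus heavy noise."""
    return [v + random.gauss(0, noise) for v in pattern]

def sq_dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

# "Train": average a few trials per condition to estimate each pattern.
train_a = [noisy_trial(pattern_a) for _ in range(20)]
train_b = [noisy_trial(pattern_b) for _ in range(20)]
mean_a = [sum(t[i] for t in train_a) / len(train_a) for i in range(N_VOXELS)]
mean_b = [sum(t[i] for t in train_b) / len(train_b) for i in range(N_VOXELS)]

# "Test": decode held-out trials by nearest centroid and count hits.
correct, n_test = 0, 200
for _ in range(n_test):
    truth = random.choice(["a", "b"])
    trial = noisy_trial(pattern_a if truth == "a" else pattern_b)
    guess = "a" if sq_dist(trial, mean_a) < sq_dist(trial, mean_b) else "b"
    correct += guess == truth

acc = correct / n_test
print(f"decoding accuracy: {acc:.0%}")
```

With clean, well-separated patterns like these, the decoder approaches the near-perfect cat-versus-house case; crank the noise up and accuracy falls toward chance, which is where the “64 percent” kind of statement comes from.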

Is reverse inference the most common fallacy when it comes to interpreting neuroscience results?

It’s by far the most common. I also think sometimes people can misinterpret what the activity means. We see pictures where it’s like, there’s one spot on the brain showing activity, but that doesn’t mean the rest of the brain is doing nothing.

You write about “neuromarketing,” or using neuroscience techniques to measure the effects of marketing. What are some of the limits here?

It hasn’t been fully tested yet. Whenever you have science mixed with people trying to sell something — in this case, the people are trying to sell the technique of neuromarketing — that’s ripe for overselling. There’s not much evidence really showing that it works. Recently there have been some studies suggesting you can use neuroimaging to improve the ability to figure out how effective an ad is going to be. But we don’t know how powerful it is yet.

Our ability to decode from brain imaging is so limited and the data are so noisy. Rarely can we decode with perfect accuracy. I can decode if you’re seeing a cat or a house with pretty much perfect accuracy, but anything interestingly cognitive, we can’t decode. But for companies, even if there’s just a 1 percent improvement in response to the ad, that could mean a lot of money, so a technique doesn’t have to be perfect to be useful for some kind of advantage. We don’t know how big the advantage will be.

One interesting point you make is that there are some issues with the increasingly common statement that addiction is a brain disease. What’s the issue here?

Addiction causes people to experience bad outcomes in life and so to that degree it’s like other diseases, right? It results directly from things going on in one’s brain. But I think calling it a “brain disease” makes it seem like it’s not a natural thing that brains should do.

Schizophrenia is a brain disease in the sense that most people behave very differently from someone with schizophrenia, whereas addiction I like to think of as a mismatch between the world we evolved in and the world we live in now. Lots of diseases, like obesity and type II diabetes, probably also have a lot of the same flavor.

We evolved this dopamine system meant to tell us to do more of things we like and less of things we don’t like. But then if you take stimulant drugs like cocaine, they operate directly on the dopamine system. They’re this evolutionarily unprecedented stimulus to that system that drives the development of new habits. So it’s really the brain doing the thing it was evolved to do, in an environment that it’s not prepared for.

Going back to reverse inference for a second, how long do you think it’ll be before we actually are able to decode psychological states?

It depends on what you’re trying to infer. Certain things are easier. If you’re talking about the overall ability to make reverse inferences on any kind of mental state, I’m not sure that we’re going to be able to do that with current brain imaging tools. There are just fundamental limits on fMRI in terms of its ability to see brain activity at the level that we might need to see it. It’s an open question, and we’re certainly learning a lot about what you can predict; part of that is going to be the development of better statistical models. Ultimately, fMRI is a limited window into the biology, and without a better window into human brain function, it’s not clear to me that we will be able to get to perfect reverse inference with this tool.