
This article was published on June 9, 2022

Scientists say AI can tell your politics from a brain scan — here’s why that’s BS

This is either the greatest AI invention in the history of computer science… or it’s BS — you decide



A team of researchers using what they call “state-of-the-art artificial intelligence techniques” has reportedly created a system capable of identifying a human’s political ideology by viewing their brain scans.

Wow! This is either the most advanced AI system in the entire known universe or it’s a total sham.

Unsurprisingly, it’s a sham: there’s little reason for excitement. You don’t even have to read the researchers’ paper to debunk their work. All you need is the phrase “politics change,” and we’re done here.

But, just for fun, let’s actually get into the paper and explain how prediction models work.

The experiment

A team of researchers from Ohio State University, the University of Pittsburgh, and New York University gathered 174 US college students (median age 21) — the vast majority of whom self-identified as liberal — and conducted brain scans on them while they completed a short battery of tests.

Per the research paper:

Each participant underwent 1.5 hours of functional MRI recording, which consisted of eight tasks and resting-state scans using a 12-channel head coil.

In essence, the researchers grabbed a bunch of young people, asked them their politics, and then designed a machine that flips a coin to “predict” a person’s politics. Only, instead of actually flipping a coin, it uses algorithms to supposedly parse brainwave data to do what’s essentially the same thing.

The problem

The AI has to predict either “liberal” or “conservative,” and, in systems such as these, there’s no option for “neither.”

So, right off the bat: the AI isn’t predicting or identifying politics. It’s forced to choose between the data in column A or the data in column B.

Let’s say I sneak into the Ohio State University AI center and scramble up all their data. I replace all the brainwaves with Rick and Morty memes and then hide my tracks so the humans can’t tell.

As long as I don’t change the labels on the data, the AI will still predict whether the experiment subjects are conservative or liberal.

You either believe that the machine has magical data powers that can arrive at a ground truth regardless of the data it’s given, or you recognize that the illusion remains the same regardless of what kind of rabbits you put in the hat.
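
To see how little the data itself matters to this kind of forced choice, here’s a rough sketch of the idea in Python. It is not the researchers’ actual pipeline (their paper uses fMRI-specific processing and its own models); the random “brainwave” features, the 174-subject count, and the labels below are all illustrative stand-ins. The point is that a two-label classifier will always pick a side, and on pure noise its held-out accuracy lands right around the coin flip.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in "brainwave" features: pure random noise, one row per subject.
n_subjects, n_features = 174, 300
X = rng.normal(size=(n_subjects, n_features))
# Self-reported labels, 0 = "liberal", 1 = "conservative" (illustrative only).
y = rng.integers(0, 2, size=n_subjects)

# The model has exactly two options; there is no "neither."
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X)[:10])

# Held-out accuracy on noise hovers around the 50% a coin flip would get.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.0%}")
```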

That 70% accuracy number is incorrect

A machine that is 70% accurate at guessing a human’s politics is always 0% accurate at determining them. This is because human political ideologies do not exist as ground truths. There is no conservative brain or liberal brain. Many people are neither or an amalgam of both. Furthermore, many people who identify as liberal actually possess conservative views and mindsets, and vice versa.

So the first problem we run into is that the researchers do not define “conservatism” or “liberalism.” They allow the subjects they are studying to define those terms for themselves — let’s keep in mind that the students have a median age of 21.

What that means, ultimately, is that the data and the labels bear no necessary relationship to one another. The researchers built a machine that always has a 50/50 chance of guessing which of two labels they’ve placed on any given dataset.

It doesn’t matter whether the machine looks for signs of conservatism in brainwaves, homosexuality in facial expressions, or criminality in the color of someone’s skin: these systems all work the exact same way.

They must brute force an inference, so they do. They may only choose from prescribed labels, so they do. And the researchers have no clue how it all works, because these are black box systems; it’s impossible to determine exactly why the AI makes any given inference.

What is accuracy?

These experiments don’t exactly pit humans against machines. They really just establish two benchmarks and then conflate them.

The scientists will give multiple humans the prediction task one or two times (depending on the controls). Then they’ll give the AI the prediction task hundreds, thousands, or millions of times.

The scientists don’t know how the machine will come to its predictions, so they can’t just punch in the ground truth parameters and call it a day.

They have to train the AI. This involves giving it the exact same task — say, parsing the data from a couple hundred brain scans — and making it run the exact same algorithms over and over.

If the machine were to inexplicably get 100% on the first try, they’d call it a day and say it was perfect! Even though they’d have no clue why — remember, this all happens in a black box.

And, as is more often the case, if it fails to meet a significant threshold, they keep tweaking the algorithm’s parameters until it gets better. You can visualize this as a scientist tuning in a radio signal through static without looking at the dial.
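
To make that radio-tuning metaphor concrete, here’s a hedged sketch of what the loop tends to look like in practice (this is my illustration, not the authors’ code): try setting after setting on the same small dataset and keep whichever one happens to score best. With fewer than 200 subjects, something in the grid will beat 50% just by luck.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(174, 300))      # stand-in features, still pure noise
y = rng.integers(0, 2, size=174)     # self-reported binary labels

best_score, best_params = 0.0, None
# "Tune the dial": sweep hyperparameters and keep the highest score.
for C in (0.01, 0.1, 1, 10, 100):
    for gamma in ("scale", 0.001, 0.01, 0.1):
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
        if score > best_score:
            best_score, best_params = score, (C, gamma)

# The reported figure is the best one found; it will sit above 50%
# even though nothing real was learned from these features.
print(f"Best accuracy from the sweep: {best_score:.0%} with (C, gamma) = {best_params}")
```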

BS in, BS out

Now, think about the fact that this particular machine only gets it right about 7 out of 10 times. That’s the best the team could do. They couldn’t tweak it any better than that.

There are fewer than 200 people in its dataset, and it already has a 50/50 chance of guessing correctly without any data whatsoever.

So feeding it all this fancy brainwave data gives it a meager 20-percentage-point bump in accuracy over base chance. And that only comes after a team of researchers from three prestigious universities combined their efforts to create what they call “state-of-the-art artificial intelligence techniques.”
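
For clarity, the arithmetic behind that bump, using the round numbers above rather than the paper’s full results:

```python
chance = 0.50     # base rate for a forced two-way choice
reported = 0.70   # the headline accuracy figure

bump_in_points = (reported - chance) * 100     # 20 percentage points
relative_lift = (reported - chance) / chance   # 40% better than chance

print(f"Bump over chance: {bump_in_points:.0f} percentage points")
print(f"Relative improvement over chance: {relative_lift:.0%}")
```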

By comparison, if you gave a human a dataset of 200 unique symbols, each carrying a hidden label of either 1 or 0, the average person could probably memorize the whole set after a relatively small number of passes, with nothing more than “right” or “wrong” as feedback.

Think about the biggest sports fan you know: how many players can they recall by team and jersey number alone, across the entire history of the sport?

A human could achieve 100% accuracy at memorizing the binary labels in a dataset of 200, given enough time.

But the AI and the human would suffer from the exact same problem if you gave them a new dataset: they’d have to start all over from scratch. Given an entirely new dataset of brainwaves and labels, it’s almost certain the AI would fail to meet the same level of accuracy without further adjustment.
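
As a toy illustration of that point (my sketch, not anything from the study): a plain lookup table scores 100% on the 200 symbols it has already seen and falls straight back to a coin flip on anything new.

```python
import random

random.seed(42)

# 200 memorized symbols, each with a hidden 0/1 label.
seen = {f"symbol_{i}": random.randint(0, 1) for i in range(200)}
# A brand-new set the "model" has never encountered.
unseen = {f"new_symbol_{i}": random.randint(0, 1) for i in range(200)}

def predict(symbol: str) -> int:
    # Return the memorized label if we know the symbol; otherwise guess.
    return seen.get(symbol, random.randint(0, 1))

acc_seen = sum(predict(s) == label for s, label in seen.items()) / len(seen)
acc_unseen = sum(predict(s) == label for s, label in unseen.items()) / len(unseen)

print(f"Accuracy on the memorized set: {acc_seen:.0%}")   # 100%
print(f"Accuracy on the brand-new set: {acc_unseen:.0%}") # roughly 50%
```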

Benchmarking this particular prediction model is exactly as useful as measuring a tarot card reader’s accuracy.

Good research, bad framing

That isn’t to say this research doesn’t have merit. I wouldn’t talk shit about a research team dedicated to exposing the flaws inherent to artificial intelligence systems. You don’t get mad at a security researcher who discovers a problem.

Unfortunately, that’s not how this research is framed.

Per the paper:

Although the direction of causality remains unclear – do people’s brains reflect the political orientation they choose or do they choose their political orientation because of their functional brain structure – the evidence here motivates further scrutiny and followup analyses into the biological and neurological roots of political behavior.

This is, in my opinion, borderline quackery. The implication here is that, like homosexuality or autism, a person may not be able to choose their own political ideology. Alternatively, it seems to suggest that our very brain chemistry can be reconfigured by the simple act of subscribing to a predefined set of political viewpoints — and by the young age of 21 no less!

This experiment relies on a tiny bit of data from a minuscule pool of humans who, from what we can tell, are demographically similar. Furthermore, its results cannot be validated in any sense of the scientific method. We’ll never know why or how the machine made any of its predictions.

We need research like this to test the limits of exploitation when it comes to these predictive models. But pretending this research has resulted in anything more sophisticated than the “Not Hotdog” app is dangerous.

This isn’t science, it’s prestidigitation with data. And framing it as a potential breakthrough in our understanding of the human brain only serves to carry water for all the AI scams — such as predictive policing — that rely on the exact same technology to do harm.
