
How Deepfakes could help implant false memories in our minds

It's startlingly easy


The human brain is a complex, miraculous thing. As best we can tell, it’s the epitome of biological evolution. But it doesn’t come with any security software preinstalled. And that makes it ridiculously easy to hack.

We like to imagine the human brain as a giant neural network that speaks its own language. When we talk about developing brain-computer interfaces we’re usually discussing some sort of transceiver that interprets brainwaves. But the fact of the matter is that we’ve been hacking human brains since the dawn of time.

Think about the actor who uses a sad memory to conjure tears or the detective who uses reverse psychology to draw out a suspect’s confession. These examples may seem less extraordinary than, say, the memory-eraser from Men in Black. But the end result is essentially the same. We’re able to edit the data our minds use to establish base reality. And we’re really good at it.

Background

A team of researchers from universities in Germany and the UK today published pre-print research detailing a study in which they successfully implanted and removed false memories in test subjects.

Per the team’s paper:

Human memory is fallible and malleable. In forensic settings in particular, this poses a challenge because people may falsely remember events with legal implications that never actually happened. Despite an urgent need for remedies, however, research on whether and how rich false autobiographical memories can be reversed under realistic conditions (i.e., using reversal strategies that can be applied in real-world settings) is virtually nonexistent.

Basically, it’s relatively easy to implant false memories. Getting rid of them is the hard part.

The study was conducted on 52 subjects who agreed to let the researchers attempt to plant a false childhood memory in their minds over several sessions. After a while, many of the subjects began to believe the false memories. The researchers then asked the subjects’ parents to claim the false stories were true.

The researchers discovered that the addition of a trusted person made it easier to both embed and remove false memories.

Per the paper:

The present study therefore not only replicates and extends previous demonstrations of false memories but, crucially, documents their reversibility after the fact: Employing two ecologically valid strategies, we show that rich but false autobiographical memories can mostly be undone. Importantly, reversal was specific to false memories (i.e., did not occur for true memories).

False memory planting techniques have been around for a while, but there hasn’t been much research on reversing them, which means this paper comes not a moment too soon.

Enter Deepfakes

There aren’t many positive use cases for implanting false memories. But, luckily, most of us don’t really have to worry about being the target of a mind-control conspiracy that involves being slowly led to believe a false memory over several sessions with our own parents’ complicity.

Yet, that’s almost exactly what happens on Facebook every day. Everything you do on the social media network is recorded and codified in order to create a detailed picture of exactly who you are. This data is used to determine which advertisements you see, where you see them, and how frequently they appear. And when someone in your trusted network happens to make a purchase through an ad, you’re more likely to start seeing those ads.

But we all know this already, right? Of course we do. You can’t go a day without seeing an article about how Facebook, Google, and all the other big tech companies are manipulating us. So why do we put up with it?

Well, it’s because our brains are better at adapting to reality than we give them credit for. The moment we know there’s a system we can manipulate, the more we think the system says something about us as humans.

A team of Harvard researchers wrote about this phenomenon back in 2016:

In one study we conducted with 188 undergraduate students, we found that participants were more interested in buying a Groupon for a restaurant advertised as sophisticated when they thought the ad had been targeted to them based on specific websites they had visited during an earlier task (browsing the web to make a travel itinerary) compared to when they thought the ad was targeted based on demographics (their age and gender) or not targeted at all.

What does this have to do with Deepfakes? It’s simple: if we’re so easily manipulated by fleeting exposure to tiny ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and visages of people we trust.

You might not, for example, plan on purchasing some Grandma’s Cookies products anytime soon, but if it was your grandma telling you how delicious they are in the commercial you’re watching… you might.

Using existing technology, it would be trivial for a big tech company to determine, for example, that you’re a college student who hasn’t seen your parents since last December. With that knowledge, Deepfakes, and the data the company already has on you, it wouldn’t take much to create targeted ads featuring your Deepfaked parents telling you to buy hot cocoa or something.
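
As a toy illustration of how cheap that inference is, here’s a sketch in the same vein. Every field, value, and rule below is made up for the example; the only claim is that this kind of filter runs over data platforms already hold.

```python
from datetime import date

# A purely hypothetical user record -- stand-ins for signals a platform aggregates.
user = {
    "status": "college_student",           # inferred from school pages, age, posts
    "hometown": "Columbus, OH",
    "location_history": [                  # (place, last time seen there)
        ("Columbus, OH", date(2020, 12, 28)),
        ("Cambridge, MA", date(2021, 3, 20)),
    ],
}

def months_since_home(u: dict, today: date) -> int:
    """Months since the location history last placed the user in their hometown."""
    home_visits = [d for place, d in u["location_history"] if place == u["hometown"]]
    last = max(home_visits)
    return (today.year - last.year) * 12 + (today.month - last.month)

# Crude targeting rule: a student who hasn't been home in months gets the
# "message from home" creative -- the slot the Deepfaked parents would fill.
if user["status"] == "college_student" and months_since_home(user, date(2021, 3, 23)) >= 3:
    print("eligible: 'message from home' ad creative")
```

None of this requires new technology. It’s a filter over data that already exists.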

But false memories?

It’s all fun and games when the stakes just involve a social media company using AI to convince you to buy some goodies. But what happens when it’s a bad actor breaking the law? Or, worse, what happens when it’s the government not breaking the law?

Police use a variety of techniques to solicit confessions. And law enforcement officers are generally under no obligation to tell the truth when doing so. In fact, it’s perfectly legal in most places for cops to outright lie in order to obtain a confession.

One popular technique involves telling a suspect that their friends, family, and any co-conspirators have already told the police the suspect committed the crime. If you can convince people that those they respect and care about believe they’ve done something wrong, it’s easier for them to accept it as fact.

How many law enforcement agencies in the world currently have an explicit policy against using manipulated media in the solicitation of a confession? Our guess would be: close to zero.

And that’s just one example. Imagine what an autocratic or iron-fisted government could do at scale with these techniques.

The best defense…

It’s good to know there are already methods we can use to extract these false memories. As the European research team discovered, our brains tend to let go of false memories when challenged but cling to the real ones. That makes our minds more resilient against attack than we might think.

However, it does put us perpetually on the defensive. Currently, our only defense against AI-assisted false memory implantation is to either see it coming or get help after it happens.

Unfortunately, the unknown unknowns make that a terrible security plan. We simply can’t anticipate every way a bad actor could exploit the loophole that makes our memories easier to edit when someone we trust is helping the process along.

With Deepfakes and enough time, you could convince someone of just about anything as long as you can figure out a way to get them to watch your videos. 

Our only real defense is to develop technology that sees through Deepfakes and other AI-manipulated media. With brain-computer interfaces set to hit consumer markets within the next few years and AI-generated media becoming less distinguishable from reality by the minute, we’re closing in on a point of no return for technology.
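
What would that defensive tech look like? A common research approach is frame-level classification: sample frames from a suspect video and score each with a binary real-versus-fake classifier. Here’s a minimal Python/PyTorch sketch; the model is untrained as written, and a real detector would need training on labeled corpora such as FaceForensics++ before its scores meant anything.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for each sampled video frame.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A ResNet-18 backbone with a single "probability of fake" output.
# Untrained here -- it must be fitted on labeled real/fake faces to be useful.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

def fake_score(frames: list[Image.Image]) -> float:
    """Average the per-frame fake probability; flag the video if it runs high."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        logits = model(batch).squeeze(1)
        return torch.sigmoid(logits).mean().item()

# frames = [Image.open(p) for p in sampled_frame_paths]  # extracted beforehand
# print("likely deepfake" if fake_score(frames) > 0.5 else "probably authentic")
```

Per-frame averaging is the simplest possible aggregation; production detectors layer on face detection, temporal models, and artifact-specific features. Even then, it’s an arms race with the generators.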

Just as the invention of the firearm made it possible for those unskilled in sword fighting to win a duel, and the calculator gave those who struggle with math the ability to perform complex calculations, we may be on the cusp of an era where psychological manipulation becomes a push-button enterprise.
