This article was published on August 11, 2022

Want to spot a deepfake video caller? Ask the suspect to turn sideways

Just try to not get arrested

Researchers have discovered a surprisingly simple way to detect deepfake video calls: ask the suspect to turn sideways.

The trick was shared this week by Metaphysic.ai, a London-based startup behind the viral Tom Cruise deepfakes.

The company used DeepFaceLive, a popular app for video deepfakes, to transform a volunteer into various celebrities.

Most of the recreations were impressive when viewed straight on. But once the faces rotated a full 90 degrees, the images became distorted and the spell was broken.

The fakes fell apart at a sharp 90° profile. Credit: Metaphysic.ai

The team believes the defects emerge because the software uses fewer reference points to estimate lateral views of faces. With so little to go on, the algorithm can't reliably predict how the face looks from the side.

“Typical 2D alignment packages consider a profile view to be 50% hidden, which hinders recognition, as well as accurate training and subsequent face synthesis,” Metaphysic.ai’s Martin Anderson explained in a blog post.

“Frequently the generated profile landmarks will ‘leap out’ to any possible group of pixels that may represent a ‘missing eye’ or other facial detail that’s obscured in a profile view.”

The algorithms only have half as many landmarks for profiles as for front-on views. Credit: Metaphysic.ai
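The occlusion Anderson describes can be illustrated with a toy geometric sketch. This is not Metaphysic.ai's or DeepFaceLive's actual code, and the handful of 3D "landmark" coordinates below are invented for the example; real alignment packages typically track far more points (dlib's common scheme uses 68). The idea is just that rotating a head 90 degrees turns roughly half the landmarks away from the camera:

```python
import numpy as np

# Toy symmetric 3D landmarks (x: left/right, y: up/down, z: toward camera).
# Coordinates are illustrative assumptions, not real facial measurements.
landmarks = np.array([
    [-0.5,  0.2, 0.3],   # left eye
    [ 0.5,  0.2, 0.3],   # right eye
    [ 0.0,  0.0, 0.6],   # nose tip
    [-0.6, -0.1, 0.1],   # left cheek
    [ 0.6, -0.1, 0.1],   # right cheek
    [ 0.0, -0.5, 0.4],   # chin
])

def visible_count(points, yaw_degrees):
    """Rotate points about the vertical axis and count those still
    facing the camera (z clearly positive after rotation)."""
    t = np.radians(yaw_degrees)
    rot = np.array([[ np.cos(t), 0, np.sin(t)],
                    [ 0,         1, 0        ],
                    [-np.sin(t), 0, np.cos(t)]])
    rotated = points @ rot.T
    # Small threshold so points lying exactly on the profile edge
    # (nose tip, chin at 90°) count as occluded.
    return int((rotated[:, 2] > 1e-9).sum())

print(visible_count(landmarks, 0))   # 6 — all landmarks face the camera
print(visible_count(landmarks, 90))  # 2 — most are hidden in profile
```

With no side-on training data to compensate, a synthesis model has to hallucinate the missing half, which is where the distortions creep in.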

These weak spots can be strengthened, but it takes a lot of work.

YouTuber DesiFakes proved it was possible by editing a deepfake Jerry Seinfeld onto a character in Pulp Fiction. But this required extensive post-processing. In addition, Seinfeld's profile closely resembled the original actor's.

Yet this is hard to replicate for the general public, because we’re rarely filmed or photographed in profile — unless we get arrested.

This can leave deepfake models with insufficient training data to generate realistic lateral views.

Alexis Arquette (left), who played the part in the film, is the spitting side-view image of Seinfeld (right). Credit: DesiFakes

Metaphysic.ai’s research emerges amid growing concerns about deepfake video calls.

In June, several European mayors were duped by a video call from a fake Vitali Klitschko.

Days later, the FBI warned that scammers were using deepfakes in interviews for fully remote jobs that offer access to valuable information.

The side-on trick wouldn't necessarily have saved every victim, and it may not work forever. Future 3D landmark systems may produce convincing profile views, while photorealistic CGI models could replace entire heads.

Nonetheless, the side-view trick adds a new chance to detect the fakers — and another reason to not get arrested.
