This article was published on March 22, 2024

Italian PM seeks justice for deepfake porn video amid surge in cases

Nearly two-thirds of women fear falling victim to deepfake pornography


Italy’s prime minister Giorgia Meloni is seeking €100,000 in damages after deepfake pornographic videos of her were shared online. 

Meloni is seeking compensation from a 40-year-old man and his father over the deepfakes, which were viewed millions of times. The videos were uploaded before she took office as prime minister in 2022. 

If successful, the PM has vowed to donate the money to a fund to support women who have been victims of gender-based violence.

While officials in this case were able to identify the perpetrators, who may now face jail time, most go under the radar. The creators and sharers of deepfake imagery are notoriously difficult to track down.   

In 2016, researchers identified just a single deepfake porn video online. In the first three quarters of 2023 alone, 143,733 new deepfake porn videos were uploaded, according to a new investigation by Channel 4 News.

As part of the probe, the British broadcaster found videos of 4,000 famous individuals on the top 40 most popular sites for this kind of content. Of those, 250 were from the UK, including Cathy Newman, a presenter from Channel 4 News itself. 

“It feels like a violation. It just feels really sinister that someone out there who’s put this together, I can’t see them, and they can see this kind of imaginary version of me, this fake version of me,” Newman said. 

“You can’t unsee that. That’s something that I’ll keep returning to. And just the idea that thousands of women have been manipulated in this way. It feels like an absolutely gross intrusion and violation,” she continued. 

“It’s really disturbing that you can, at a click of a button, find this stuff, and people can make this grotesque parody of reality with absolute ease.”

The proliferation of AI tools has made it easier than ever before to create deepfake porn videos, which superimpose an image of someone’s face onto the body of another. 

Just this week, Dutch news outlet AD uncovered a deluge of deepfake porn videos featuring dozens of Dutch celebrities, parliamentarians, and members of the Royal Family. All of them were women. 

The most high-profile case of the year came in January, when explicit, non-consensual deepfake images of Taylor Swift flooded X, formerly Twitter. One of the images racked up 47 million views before it was removed 17 hours later.  

While instances where celebrities are affected get the most press attention, this is a problem affecting women (and sometimes children) from all walks of life. Nearly two-thirds of women fear falling victim to deepfake pornography, according to a report by cybersecurity firm ESET published Wednesday.

“Digital images are nearly impossible to truly delete, and it is easier than ever to artificially generate pornography with somebody’s face on it,” said Jake Moore, advisor at ESET. 

In the UK, the Online Safety Act, which became law in October 2023, criminalises the sharing of deepfake pornography. In general, however, the law is struggling to keep up. In the EU, despite a number of incoming regulations targeting AI and social media, there are no specific laws protecting victims of non-consensual deepfake pornography. 

Many are now looking to AI companies to crack down on the creation of deepfake porn, and to social media giants to curb its spread online. However, relying on tech companies, which rake in ad revenue from the flurry of online activity on their platforms, to do the right thing may not be the best strategy. 

While we wait for the law to catch up, technologies like authentication systems, digital watermarking, and blockchain could help tackle and trace deepfakes — making us all more secure online.
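
To make the watermarking idea concrete, the sketch below hides a short provenance tag in the least significant bits of an image's pixels and reads it back out. This is a deliberately simple illustration rather than a production technique: it assumes Python with numpy and Pillow installed, and the file names and payload string are hypothetical. Real provenance systems, such as cryptographically signed C2PA metadata, are considerably more robust than this toy scheme.

```python
# Minimal illustration of least-significant-bit (LSB) image watermarking.
# Assumes numpy and Pillow are installed; file names below are hypothetical.
import numpy as np
from PIL import Image


def embed_watermark(image_path: str, payload: str, out_path: str) -> None:
    """Hide a short UTF-8 payload in the least significant bits of an image."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    flat = img.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    # Clear each target byte's lowest bit, then write one payload bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    # PNG is lossless, so the embedded bits survive saving.
    Image.fromarray(flat.reshape(img.shape)).save(out_path, format="PNG")


def extract_watermark(image_path: str, payload_len: int) -> str:
    """Read payload_len bytes back out of the image's least significant bits."""
    flat = np.array(Image.open(image_path).convert("RGB")).flatten()
    bits = flat[: payload_len * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")


# Hypothetical usage:
# embed_watermark("original.png", "issued-by:example-newsroom-2024", "marked.png")
# print(extract_watermark("marked.png", len("issued-by:example-newsroom-2024")))
```

A scheme like this is trivially stripped by re-encoding or cropping an image, which is why the broader push is toward signed provenance metadata and platform-level detection rather than bits hidden in pixels.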

“What we need is a comprehensive, multi-dimensional global collaboration strategy emphasising regulation, technology, and security,” Mark Minevich, author of Our Planet Powered by AI and a UN advisor on AI technology, previously told TNW.  

“This will not only confront the immediate challenges of non-consensual deepfakes but also set a foundation for a digital environment characterised by trust, transparency, and enduring security.” 
