Alphabet subsidiary Jigsaw has unveiled a free image verification tool called Assembler, designed to help fact-checkers and journalists identify manipulated media.
The “early stage experimental platform” blends multiple image detection models into a single tool that can identify various forms of manipulation.
These models were provided by academics from the University of Maryland, University Federico II of Naples, and the University of California, Berkeley.
Jigsaw has also built two of its own detectors for Assembler. The first is a synthetic media detector that uses machine learning to identify deepfakes. It does this by comparing images of real people with fake ones produced by the StyleGAN architecture. The second is an ensemble model, which was trained using signals from multiple detectors to simultaneously search for various forms of manipulation.
[Read: Twitter’s new manipulated media rules leave a lot of gray area]
Jigsaw claims that this combination of detection methods produces more accurate results than any individual detector.
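The ensemble idea can be sketched in a few lines: each base detector emits a manipulation score, and a meta-model combines them into one verdict. The detector names, scores, and weights below are invented for illustration and do not reflect Assembler's actual models:

```python
# Minimal illustration of an ensemble detector: each base detector emits a
# manipulation score in [0, 1], and a weighted combination produces a single
# verdict. All names and numbers here are hypothetical, not Jigsaw's.

def ensemble_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-detector manipulation scores."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Hypothetical outputs from three base detectors for one image.
scores = {"copy_move": 0.82, "splice": 0.67, "stylegan_deepfake": 0.10}
# Hypothetical reliability weights, e.g. learned during training.
weights = {"copy_move": 0.5, "splice": 0.3, "stylegan_deepfake": 0.2}

combined = ensemble_score(scores, weights)
print(f"combined manipulation score: {combined:.2f}")
```

In practice the combining step would itself be a trained model rather than fixed weights, which is roughly what lets an ensemble outperform any single detector on manipulations that only some detectors catch.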
Developments in disinformation
Jared Cohen, Jigsaw’s chief executive, tweeted that Assembler “helps journalists & fact-checkers detect manipulated images, which are a growing threat to the public conversation.” However, the company has no plans to release the tool to the public, according to a report in the New York Times.
Critics have questioned the decision to make the tech sector and media the arbiters of truth at a time of growing mistrust in both industries.
Why not for the public, @jigsaw? Ordinary people should be given the chance to defend themselves against manipulation, too. Sell it, if you need to.
Alphabet’s Jigsaw unveils a tool to help journalists spot deepfakes and manipulated images https://t.co/AnPU7eEmh9
— Alessandro Perilli ✪ AI│Automation│Cybersecurity (@giano) February 4, 2020
Assembler is currently being tested with news outlets and fact-checkers including Agence France-Presse, Animal Politico, Code for Africa, Les Décodeurs du Monde, and Rappler.
Jigsaw has revealed some of the challenges exposed by its early trials. They include debunking images that are underrepresented in the company’s training sets, such as screenshots of other screenshots and images that have been reformatted or shrunk.
Other problems involved checking low-resolution or small images from social media and instant messages, and a lack of clarity over the strengths and weaknesses of each individual detector.
Jigsaw’s tumultuous journey
Assembler is the latest addition to Jigsaw’s portfolio of products and experiments launched to address disinformation, as well as online abuse and harassment, censorship, and violent extremism.
The company was launched in 2010 as Google Ideas, an in-house think tank, before rebranding as technology incubator Jigsaw in 2016.
“The team’s mission is to use technology to tackle the toughest geopolitical challenges,” then-Alphabet executive chairman Eric Schmidt wrote in a blog post announcing the changes.
Jigsaw has struggled at times to fulfill this grand ambition.
Last year, employees of the company told Motherboard that founder Jared Cohen had a “white savior complex” and that abuse of staff had “been so great that there’s now a support group for people to get out of the fucking team.”
Jigsaw has also been criticized for a controversial experiment it conducted in 2018 to prove how easily disinformation can spread. The company created a fake website called “Down With Stalin” and then paid a Russian troll service $250 to conduct a two-week disinformation campaign attacking the site.
“The biggest risk is that this experiment could be spun as ‘Google meddles in Russian culture and politics.’ It fits anti-American clichés perfectly,” Johns Hopkins University political scientist Thomas Rid told Wired. “Didn’t they see they were tapping right into that narrative?”