This article was published on April 15, 2021

Twitter will reveal how its algorithmic biases cause ‘unintended harms’

A new initiative will investigate Twitter's ML decisions

Story by Thomas Macaulay, Writer at Neural by TNW

Twitter has launched a new initiative called “Responsible ML” that will investigate the harms caused by the platform’s algorithms.

The company said on Wednesday that it will use the findings to improve the experience on Twitter:

This may result in changing our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards into how we design and build policies when they have an outsized impact on one particular community.

The move comes amid mounting concerns around social media algorithms amplifying biases and spreading conspiracy theories.



A recent example of this on Twitter involved an image cropping algorithm that automatically prioritized white faces over Black ones.

Twitter said the image-cropping algorithm will be analyzed by the Responsible ML team.

The team will also conduct a fairness assessment of Twitter’s timeline recommendations across racial subgroups, and study content recommendations for different political ideologies in seven countries.

Cautious optimism

Tech firms are often accused of using responsible AI initiatives to divert criticism and regulatory intervention. But Twitter’s new project has attracted praise from AI ethicists.

Margaret Mitchell, who co-led Google’s ethical AI team before her controversial firing in February, commended the initiative’s approach.

Twitter’s recent hiring of Rumman Chowdhury has also given the project some credibility.

Chowdhury, a world-renowned expert in AI ethics, was appointed director of ML Ethics, Transparency & Accountability (META) at Twitter in February.

In a blog post, she said Twitter will share the learnings and best practices from the initiative:

This may come in the form of peer-reviewed research, data-insights, high-level descriptions of our findings or approaches, and even some of our unsuccessful attempts to address these emerging challenges. We’ll continue to work closely with third party academic researchers to identify ways we can improve our work and encourage their feedback.

She added that her team is building explainable ML solutions to show how the algorithms work. They’re also exploring ways to give users more control over how ML shapes their experience.

Not all the work will translate into product changes, but it will hopefully at least provide some transparency into how Twitter’s algorithms work.
