
This article was published on August 28, 2020

New algorithm can identify misogyny on Twitter

The AI analyzes the context of a tweet to distinguish between abuse and sarcasm


Image credit: Esther Vargas

Researchers from the Queensland University of Technology (QUT) in Australia have developed an algorithm that detects misogynistic content on Twitter.

The team developed the system by first mining 1 million tweets. They then refined the dataset by searching the posts for three abusive keywords: whore, slut, and rape.
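
As a rough illustration of that filtering step (not the team's actual code; the corpus and tokenization here are stand-ins), a keyword pass can be written in a few lines of Python:

```python
# Illustrative sketch: reduce a mined corpus to tweets containing
# any of the three abusive keywords the researchers searched for.
KEYWORDS = {"whore", "slut", "rape"}

def contains_keyword(tweet: str) -> bool:
    """True if any token in the tweet matches a target keyword."""
    tokens = (word.strip(".,!?#@\"'") for word in tweet.lower().split())
    return any(token in KEYWORDS for token in tokens)

# `mined_tweets` stands in for the roughly 1 million collected posts.
mined_tweets = ["example tweet one", "example tweet two"]
candidates = [t for t in mined_tweets if contains_keyword(t)]
```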

Next, they categorized the remaining 5,000 tweets as either misogynistic or not, based on their context and intent. These labeled tweets were then fed to a machine learning classifier, which used the samples to create its own classification model.
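
The article doesn't specify which classifier the team used, so the following is only a minimal stand-in: a TF-IDF plus logistic-regression baseline in scikit-learn, trained on hypothetical labeled examples, to show the labeled-tweets-to-model step in miniature.

```python
# Minimal stand-in for the training step: labeled tweets go in,
# a classification model comes out. The QUT system itself uses a
# deeper, context-aware model (described below).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 1 = misogynistic, 0 = not misogynistic.
texts = [
    "get back to the kitchen",           # abusive in context
    "what should I cook in the kitchen"  # benign use of similar words
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["get back to the kitchen"]))  # -> [1]
```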

The system uses a deep learning algorithm to adjust its understanding of terminology as language evolves. As the AI built up its vocabulary, the researchers monitored the context and intent of the language to help the algorithm differentiate between abuse, sarcasm, and “friendly use of aggressive terminology.”
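
The article doesn't name the architecture; pretrained transformer models are one common way to get this kind of context sensitivity. A sketch using the Hugging Face transformers library follows, where the model choice is a generic placeholder, not the paper's method:

```python
# Hedged sketch of context-sensitive classification. The model below
# is a generic placeholder fine-tuned for sentiment, NOT the QUT
# misogyny detector; it only illustrates that transformer pipelines
# score whole phrases in context rather than matching keywords.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("get back to the kitchen"))
```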

“Take the phrase ‘get back to the kitchen’ as an example — devoid of context of structural inequality, a machine’s literal interpretation could miss the misogynistic meaning,” said Professor Richi Nayak, a co-author of the study.

“But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet.”


Nayak said this enabled the system to understand different contexts just by analyzing text, without the help of vocal tone.

“We were very happy when our algorithm identified ‘go back to the kitchen’ as misogynistic — it demonstrated that the context learning works.”

The researchers say the model identifies misogynistic tweets with 75% accuracy. It could also be adjusted to spot racism, homophobia, or abuse of disabled people.
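
For reference, accuracy here simply means the fraction of held-out tweets the model labels correctly; with hypothetical predictions it reduces to:

```python
from sklearn.metrics import accuracy_score

# Hypothetical held-out labels vs. model predictions.
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]  # one mistake out of four
print(accuracy_score(y_true, y_pred))  # 0.75
```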

The team now wants social media platforms to develop their research into an abuse detection tool.

“At the moment, the onus is on the user to report abuse they receive,” said Nayak. “We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online.”

You can read the research paper on the Springer database of academic journals.
