Twitter Abuse Toward Women Is Rampant, Amnesty Report Says

By Emily Dreyfuss | Tue, 18 Dec 2018

For many women, especially journalists, politicians, and other public figures, Twitter is something to endure. Many have accounts out of professional necessity, but the cost of their participation in Twitter discourse is often attacks, threats, and harassment. Women learn to block, mute, report, and ignore their mentions. Some tweet directly at Twitter CEO Jack Dorsey, frustrated that he seems never to take the problem of abuse against women on the site seriously. He rarely answers them directly.

Amnesty International considers such online abuse against women a human rights issue, and it has repeatedly called on Twitter to release “meaningful information about reports of violence and abuse against women, as well as other groups, on the platform, and how they respond to it.” Twitter refused, so Amnesty took matters into its own hands. On Tuesday, it launched an interactive website detailing the results of a crowdsourced study of harassment against women on Twitter, undertaken in partnership with Element AI, an artificial intelligence company.

“We have built the world’s largest crowdsourced data set about online abuse against women,” Milena Marin, senior adviser for tactical research at Amnesty International, said in a statement. “We have the data to back up what women have long been telling us—that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked.”

The study looked at 778 women journalists and politicians in the US and UK, and found that 7.1 percent of tweets sent to them last year were abusive or problematic. The journalists and politicians received abuse at similar rates, and women were targeted on both the right and the left. Women of color in the study were 34 percent more likely to be the targets of harassment than white women. Black women were targeted most of all: One in every 10 tweets sent to them was abusive or problematic, whereas for white women it was one in 15.

“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” according to Marin.

"Abuse, malicious automation, and manipulation detract from the health of Twitter," Vijaya Gadde, Twitter’s legal, policy, and trust and safety lead, wrote in a response to Amnesty, which was provided to WIRED. "We are committed to holding ourselves publicly accountable toward progress in this regard."

Amnesty’s Troll Patrol project relied on a combination of crowdsourcing and machine learning. More than 6,500 volunteers from 150 countries helped label a subset of 288,000 tweets (out of 14.5 million) that had been sent to the 778 women between January and December of 2017. The volunteers were trained to spot abusive tweets, which promote violence against or threaten people based on their identification with a group, like race or gender, and which violate Twitter’s terms of service, as well as problematic tweets, which Amnesty defines as “hurtful or hostile content,” like negative stereotyping, that does “not necessarily meet the threshold of abuse.” Three experts also analyzed a smaller sample of 1,000 tweets.
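
Amnesty hasn’t published the exact rules it used to reconcile disagreements among volunteers, but a common way to collapse several crowdsourced labels into one per-tweet category is a simple majority vote. The Python sketch below is purely illustrative of that pattern; the function name, label set, and fallback to expert review are assumptions of this article, not Amnesty’s documented method.

    from collections import Counter

    # Amnesty's taxonomy: "abusive" (violates Twitter's rules),
    # "problematic" (hurtful or hostile but below the abuse threshold),
    # or "neither".
    def aggregate_labels(volunteer_labels):
        """Collapse one tweet's volunteer labels into a single category
        by strict majority vote (illustrative, not Amnesty's method)."""
        label, votes = Counter(volunteer_labels).most_common(1)[0]
        if votes > len(volunteer_labels) / 2:
            return label
        # No majority: route the tweet to expert annotators instead.
        return "needs_expert_review"

    print(aggregate_labels(["abusive", "problematic", "abusive"]))  # abusive
    print(aggregate_labels(["abusive", "problematic", "neither"]))  # needs_expert_review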

Element AI then used the expert and crowdsourced labels to extrapolate how much abuse the 778 women faced on Twitter overall. Its model estimated that of the 14.5 million tweets mentioning the women, 1.1 million were abusive or problematic. That works out to a problematic or abusive tweet every 30 seconds, on average.
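
The every-30-seconds figure follows from simple division over the study’s 12-month window. A quick sanity check in Python, using the study’s rounded totals:

    # One year of seconds divided across the model's estimate of
    # 1.1 million abusive or problematic tweets sent in 2017.
    flagged_tweets = 1_100_000
    seconds_in_year = 365 * 24 * 60 * 60      # 31,536,000

    interval = seconds_in_year / flagged_tweets
    print(f"one flagged tweet every {interval:.1f} seconds")  # ~28.7, about every 30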

The Troll Patrol's findings on race stand out most. Of the 778 journalists and politicians, black women were 84 percent more likely to be targets of abusive tweets than white women, and 60 percent more likely to receive problematic tweets. Asian women were the most likely to receive threats mentioning ethnic, racial, and religious slurs. Latinx women were slightly less likely to receive any abusive or problematic tweets than white women, but the abuse they received was 81 percent more likely to be physically and specifically threatening. (More details on the study's methodology are available online.)

The study also found that the left-leaning politicians analyzed in both the US and the UK faced 23 percent more abusive and problematic tweets than politicians from parties on the right. The opposite was true for the media. “Journalists working for right-leaning media groups like Daily Mail, the Sun or Breitbart were mentioned in 64 percent more problematic and abusive tweets than journalists working at left leaning organizations like The New York Times or the Guardian,” the study says.

To be sure, the study isn’t a comprehensive encapsulation of the harassment women face online. The authors note that the specific findings apply only to this group of women, and “would likely differ if applied to other professions, countries, or the wider population.” The study also categorized the women’s race based on publicly available information, a method the authors admit is “crude” and “not necessarily a reflection of how each of the 778 women self-identify.” A similar caveat applies to political affiliation, which was based on party membership for politicians and, for journalists, on how a media bias group rated their news outlets.

The study also relied on the public Twitter data available for download from the platform in March 2018. Any tweets deleted or flagged as abusive before Troll Patrol gathered them from Twitter’s firehose on that date were not included in the analysis. As such, the authors say, the true rates of abuse are likely higher than reported.

Twitter’s Gadde also took issue with the way Amnesty defined “problematic” tweets, writing: “We would welcome further discussion about how you have defined ‘problematic’ as part of this research in accordance with the need to protect free expression and ensure policies are clearly and narrowly drafted.” The report does acknowledge that “problematic tweets may qualify as legitimate speech and would not necessarily be subject to removal from the platform,” adding that “we included problematic tweets because it is important to highlight the breadth and depth of toxicity on Twitter in its various forms, and to recognize the cumulative effect that problematic content may have on the ability of women to freely express themselves on the platform.”

What is abundantly clear is the sheer scale of the abuse against women on Twitter. Over the past year, Twitter has pledged to improve the health of its platform, although progress on that front has been uneven so far. Amnesty hopes the data set can be used to help social media platforms, including Twitter, develop better tools to protect women.

The point of the study is not only to put hard data behind what women have been saying for years about their experiences on Twitter, but also to demonstrate the power and limitations of AI in online content moderation. On Tuesday, Amnesty and Element AI also unveiled a machine-learning tool, trained on the project’s data, that tries to automatically identify abusive tweets. The automated moderation tool works reasonably well, the researchers say, but it is far from perfect. “It still achieves about a 50 percent accuracy level when compared to the judgement of our experts,” the report states, “meaning it identifies two in every 14 tweets as abusive or problematic, whereas our experts identified one in every 14 tweets as abusive or problematic.” That overcorrection underscores the risks of censorship inherent in even state-of-the-art automated moderation.
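
Read plainly, those figures say the model flags tweets at twice the rate the experts do. Under the simplifying assumption that every expert-flagged tweet is also caught by the model (an assumption of this article, not a claim in the report), the arithmetic looks like this:

    # The model flags 2 of every 14 tweets; experts flag 1 of every 14.
    model_rate = 2 / 14
    expert_rate = 1 / 14

    # If all expert-flagged tweets are among the model's flags, roughly
    # half of the model's flags are ones the experts would not make.
    agreement = expert_rate / model_rate
    print(f"share of model flags experts agree with: {agreement:.0%}")  # 50%

    # Scaled to the study's 14.5 million tweets, that gap is sizable.
    over_flagged = (model_rate - expert_rate) * 14_500_000
    print(f"tweets flagged beyond the expert rate: ~{over_flagged:,.0f}")  # ~1,035,714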

“Amnesty International and Element AI’s experience using machine learning to detect online abuse against women highlights the risks of leaving it to algorithms to determine what constitutes abuse,” the report concludes. Though automation has a role to play, Amnesty recommends that platforms like Twitter use it in combination with human review, and it stresses the importance of transparency.

"We remain committed to expanding our transparency reporting to better inform people about the actions we take under the Twitter rules," Gadde wrote in her response, dated December 12. "We are grateful for the feedback Amnesty shared on what this should include."

Twitter released its latest transparency report that day, with a new section covering enforcement of the platform's rules. But it still doesn't provide all the information Amnesty seeks, which Twitter acknowledges. "While we are not able to provide some granular breakdowns because Twitter does not collect the data from account holders," Gadde said to Amnesty, "we hope to continue to evolve the data we share to better inform the wider public debate."

For now, Amnesty’s crowdsourced data set is the most revealing look available at a problem that so many people know about but haven’t been able to quantify.
