Islamophobia endurance in Social Media

By Fundación de Cultura Islámica · 20 May, 2019

Facebook, YouTube, and Amazon moved to remove or reduce the spread of anti-vaccination content after recent public outcry. The platforms have largely eradicated ISIS content and made inroads in removing white supremacists, working to keep them off their services. But through all this, anti-Muslim content has been allowed to fester across social media.

For years, Muslims have endured racial slurs, dehumanizing photos, threats of violence, and targeted harassment campaigns, which continue to spread and generate significant engagement on social media platforms even though such content is prohibited by most terms of service. This is happening amid increasing violence against Muslims in the US and attacks on places of worship worldwide, including last week’s murder of 50 people at two mosques in New Zealand by a man police say was steeped in white supremacist internet meme culture.

Researchers say Facebook is the primary mainstream platform where extremists organize and anti-Muslim content is deliberately spread.

Maarten Schenk, editor of the fact-checking site Lead Stories and the developer of Trendolizer, a tool that can be used to track the virality of fake news, recently wrote about a network of 70 Macedonian websites publishing disinformation for profit. Of the top 10 stories on the websites, eight had the word “Muslim” in the title, Schenk said.
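
The pattern Schenk describes is easy to check mechanically. As a rough illustration (not Trendolizer’s actual API; the story list and function below are invented for this sketch), counting keyword mentions across a site’s top headlines takes only a few lines of Python:

```python
# Sketch: count how many of a site's top stories mention a keyword in the
# title, the kind of tally behind "8 of the top 10 had 'Muslim' in the title".
# The data below is invented; Trendolizer's real output format is not public.

def count_keyword_in_titles(stories, keyword):
    """Case-insensitive count of story titles containing `keyword`."""
    keyword = keyword.lower()
    return sum(1 for story in stories if keyword in story["title"].lower())

top_stories = [
    {"title": "Muslims demand new law, report claims", "shares": 54000},
    {"title": "Local council approves new park", "shares": 1200},
]

hits = count_keyword_in_titles(top_stories, "muslim")
print(f"{hits} of {len(top_stories)} top stories mention the keyword")
```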

“Most of these stories are old or sensationalized or even completely not true. Yet they keep reappearing over and over again,” he said. “There clearly is a big ‘demand’ for such articles if you see how many people are willing to like and share them.”

The trend has been going on for years. In 2017, BuzzFeed News reported on the website True Trumpers, which used false anti-Muslim headlines to generate engagement on Facebook and, in turn, financial profit.


Islamophobia as a political tool
Politicians have also used anti-Muslim rhetoric to bolster their popularity among voters, which then takes off on social media.

In April 2018, a BuzzFeed News analysis found that Republican officials routinely spread anti-Muslim sentiments to their constituents across 49 states. People who dislike Muslims often belong to other extremist communities, and online anti-Muslim propaganda has made its way from Europe to President Trump’s Twitter feed. Hoaxes about Muslims often live on even after being debunked. In 2016, conservative commentator Allen West’s popular Facebook page shared a meme stating that Trump’s former defense secretary, James Mattis, was chosen for the job in order to “exterminate” Muslims.

Researchers of extremism say the horrifying attack in New Zealand should be the catalyzing moment that makes platforms like Facebook put more focus on removing anti-Muslim hate speech. But they aren’t optimistic about it happening.

“Islamophobia happens to be something that made these companies lots and lots of money,” said Whitney Phillips, an assistant professor at Syracuse University whose research includes online harassment. She said this type of content generates engagement, which in turn keeps people on the platform and available to see ads.

A change of direction
In an emailed statement, a Facebook spokesperson said the company has been taking down content specific to the attack — it said it had removed 1.5 million videos of the attack in the first 24 hours — but addressed questions about anti-Muslim hate speech by linking to a blog post from 2017.

“Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement,” the statement said. “We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again.”
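
Facebook has not published how that internal database works. As a minimal sketch of the general idea (fingerprint each known video, then refuse uploads whose fingerprint is already on file), the following uses a plain SHA-256 file hash. An exact hash only catches byte-identical copies; production systems rely on perceptual fingerprints that survive re-encoding, cropping, or watermarking:

```python
import hashlib

# Sketch of exact-match re-upload blocking. A simple set stands in for the
# "internal database". A SHA-256 digest only catches byte-identical copies;
# real systems use perceptual video fingerprints instead.

known_fingerprints = set()

def fingerprint(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large videos never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def register_banned_video(path):
    """Add a known copy of the video to the banned-fingerprint database."""
    known_fingerprints.add(fingerprint(path))

def should_block_upload(path):
    """True if this exact file was previously registered as banned."""
    return fingerprint(path) in known_fingerprints
```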

Megan Squire is an Elon University computer science professor who has been collecting data about extremist behavior on 15 different platforms since 2016. She told BuzzFeed News that platforms typically move to take down anti-Muslim hate speech only after a reporter asks about a specific group or set of pages. But larger structural issues are not addressed.

“Sometimes, their ultimate decision is a good decision, the problem is that it comes from a place of corporate ass-covering instead of a strong ideological position,” Phillips said.

This is true for anti-Muslim hate speech and other bigoted speech on social media platforms, none of which happens in isolation, Phillips said. When Infowars was de-platformed, it was companies responding to news of the day. The same is happening with anti-vaccination disinformation across Facebook, YouTube, and others.

“The trickiest aspect of this story is how good for business hate is for social media platforms,” said Phillips.

Structural problems in journalism also contribute by focusing on the shooter instead of their victims. “I think that there’s not a lot of sympathetic portrayals of individual Muslim people and so the ideas about Islamophobia get to be these abstract concepts that don’t connect to individual people,” Phillips said.

Squire said changes Facebook recently made to how groups on the platform function provided a way for people who spread hateful content “to hide in plain sight” and could make the problem even worse.

The Facebook algorithm, for example, recommends related groups that can point people to extremism. Even after the New Zealand attack, the company allowed groups with names like “War against Islam” and “Bikers Against Radical Islam Europe” to exist. They have memberships in the thousands.
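
Facebook’s recommender is proprietary, but the mechanism Squire points to, suggesting groups whose membership overlaps with groups a user has already joined, can be sketched in a few lines. Everything here (group names, data, scoring) is invented for illustration:

```python
from collections import Counter

# Toy "related groups" recommender: rank groups by how many members they
# share with the groups a user already belongs to. All data is invented.

group_members = {
    "group_a": {"u1", "u2", "u3"},
    "group_b": {"u2", "u3", "u4"},
    "group_c": {"u9"},
}

def related_groups(user, top_n=3):
    """Suggest unjoined groups, ranked by member overlap with joined ones."""
    joined = {g for g, members in group_members.items() if user in members}
    overlap = Counter()
    for g in joined:
        for other, members in group_members.items():
            if other not in joined:
                overlap[other] += len(group_members[g] & members)
    return [g for g, score in overlap.most_common(top_n) if score > 0]

print(related_groups("u1"))  # ['group_b']: it shares u2 and u3 with group_a
```

Even this toy version shows the feedback loop: suggestions cluster around groups whose members already overlap with a user’s current groups, which is how a fringe community can keep pulling its members toward more of the same.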

Groups are also frequently created with fake identities or through pages, making it difficult to track their origin — and if the groups are “closed” or “secret,” only members can see inside them. That also means they’re generally poorly moderated — groups are tasked with policing themselves and there’s no way on Facebook to report an entire group, only the content within it.


“I believe that because of the changes Facebook made, that platform is one of the safest places for them to coordinate online,” she said. “They know that by using the social media platforms they can spread their message and they figured out how to do that.”

Squire says she’s able to find anti-Muslim groups on Facebook easily and is currently tracking about 200 of them. Some name themselves in ways that play into freedom-of-speech arguments, but other groups spread anti-Muslim hate speech without fear.

“They’ll name their groups something like ‘Infidels against radical Islam,’” she said. “So they claim that they’re not against all Islam but they’re pumping out the same propaganda.”

Shireen Mitchell, the founder of Stop Online Violence Against Women, researches the impact of social media on its users. She points out that those who spread hate know how to game social media networks, so an algorithmic solution from the companies will not be enough.

“They’re using the tool as the tool was designed,” Mitchell said. “People have to be honest that bots and trolls exist. There’s too much denial. That in itself feeds the trolls.”

In her study of how the Russian Internet Research Agency used social media to target black issues during the 2016 election, she saw that the key was to find a wedge issue and capitalize on the rage. It was about hijacking the conversation. Mitchell said that strategy works because companies are more afraid of censoring voices than keeping their users safe.

“They’re putting censorship up against safety,” Mitchell said. “Safety should be priority, not censorship.”

Facebook has said it has been actively removing comments that “praise and support” the New Zealand attack, but the company said nothing about stepping up efforts to eradicate other anti-Muslim speech spread on its platform.

“They’re making choices, and those choices are not in the vast interest of marginalized people,” Mitchell said, “not in the vast interest of people being victimized.”

