Following its move to scrap fact-checking on its platforms, social media giant Meta has decided to loosen its rules around hate speech and abuse as well. The move is being seen as a tactic to curry favour with the incoming Donald Trump administration in the United States.
On Tuesday, Meta CEO Mark Zuckerberg said his company will “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse,” citing “recent elections” as a catalyst for the decision.
Meta has updated its community standards, the rules users are asked to abide by. The updated policy states: "We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like weird."
Here's what the new move will change
This means that on Facebook, Instagram and Threads, users are now permitted to call gay people mentally ill. However, other slurs and what Meta terms 'harmful stereotypes historically linked to intimidation', such as blackface and Holocaust denial, remain prohibited.
The Menlo Park, California-based company also removed a sentence from its “policy rationale” explaining why it bans certain hateful conduct. The now-deleted sentence said that hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.”
Critics slam the move
“The policy change is a tactic to earn favour with the incoming administration while also reducing business costs related to content moderation,” said Ben Leiner, a lecturer at the University of Virginia's Darden School of Business who studies political and technology trends.
“This decision will lead to real-world harm, not only in the United States, where there has been an uptick in hate speech and disinformation on social media platforms, but also abroad, where disinformation on Facebook has accelerated ethnic conflict in places like Myanmar,” he added.
Notably, in 2018, Meta acknowledged that it had not done enough to prevent its platform from being used to “incite offline violence” in Myanmar, fuelling communal hatred and violence against the country's Muslim Rohingya minority.
(With AP inputs)