Harmful content, including hate speech, has increased significantly on Meta's platforms after the company stopped third-party fact-checking in the US and relaxed its moderation policies, a study has shown, AFP reported.
The study, conducted among about 7,000 active users of Instagram, Facebook, and Threads, comes after the Menlo Park-based company abandoned fact-checking in the US in January and shifted the task of uncovering falsehoods to ordinary users under a model known as "Community Notes," popularized by X.
The decision was widely seen as an attempt to appease the new administration of President Donald Trump, whose conservative supporters have long complained that fact-checking on tech platforms is a way to restrict free speech and censor right-wing content.
Meta also lifted restrictions on topics such as gender and sexual identity. The tech giant's updated community guidelines state that its platforms will allow users to accuse people of "mental illness" or "abnormality" based on their gender or sexual orientation.
"These policy changes mark a dramatic reversal of the content moderation standards the company has built over nearly a decade," said a study published by digital and human rights groups including UltraViolet, GLAAD, and All Out.
"Among the approximately 7,000 active users who participated in the study, we found clear evidence of an increase in harmful content, a decrease in freedom of expression, and an increase in self-censorship."
One in six survey participants reported being a victim of some form of gender- or sexual orientation-based violence on Meta platforms, and 66% said they had witnessed harmful content, such as material containing hate or violence.
92% of users surveyed said they were concerned about the increase in harmful content and felt "less protected from being exposed to or targeted by" such material on Meta's platforms.
77% of respondents described feeling "less safe" expressing their opinions freely.
The company declined to comment on the survey.
In its latest quarterly report, published in May, Meta insisted that the January changes had had minimal impact.
"Following the changes announced in January, we reduced enforcement errors in the US by half, while the low prevalence of content violating our rules across the platform remained largely unchanged in most areas of concern during the same period," the report said.
But the groups behind the study insisted that the report did not reflect the experiences of users who had been targeted with hate and harassment.
"Social media is no longer just a place we 'go.' It's where we live, work, and play.
That's why it's more important than ever to ensure that all people can safely access these spaces and express themselves freely without fear of reprisal," said Jenna Sherman, director of the UltraViolet campaign.
"But after helping to establish a standard for online content moderation for nearly a decade, (CEO) Mark Zuckerberg has decided to take his company backward, leaving vulnerable users behind in the process.
Facebook and Instagram already had a problem with fairness. Now it's out of control," Sherman added.
The groups called on Meta to hire an independent third party to "formally analyze the changes in harmful content facilitated by the policy changes" made in January, and to quickly restore the content moderation standards that were previously in place.
The International Fact-Checking Network has already warned of disastrous consequences if Meta extends its fact-checking policy changes beyond the US to its programs covering more than 100 countries.