Meta’s decision to end its partnerships with fact-checking agencies and adopt a collaborative moderation model has sparked intense debate, especially among experts.
Inspired by X’s Community Notes program (formerly Twitter), the new approach shifts the task of flagging dubious information to users. Critics fear the change will ease the spread of misinformation, increasing the risk of hate speech and real-world violence.
How the fact-checking program worked
Since 2016, Meta has collaborated with independent agencies to verify and limit the spread of false information on its platforms. PolitiFact, owned by Poynter, was a key partner in this effort in the United States.
Angie Drobnic Holan, former editor-in-chief of PolitiFact and current director of the International Fact-Checking Network (IFCN), described the program as a “brake” on misinformation, which worked by placing warnings on dubious posts. “It acts as an obstacle in the path of false information,” Holan explained.
The content verified ranged from rumors about celebrity deaths to alleged miracle cures. The program grew out of mounting concern about social media’s role in spreading unverified rumors, such as the false 2016 story that Pope Francis had endorsed Donald Trump.
Zuckerberg’s justification and reactions
Mark Zuckerberg defended the change, claiming that fact-checking could be biased and that the new model offers greater freedom of expression. According to Meta’s CEO, “There have been cases where we have removed content that, upon second look, did not violate our policies”.
Critics, however, see the decision as a strategy to please President-elect Donald Trump. Meta also announced Joel Kaplan, a Republican lobbyist, as its new global public affairs director and added UFC CEO Dana White, a close friend of Trump’s, to the company’s board. Trump himself said the changes at Meta were probably a reaction to his threats.
Experts warn about risks
For many experts, the change represents a setback for digital safety. Angie Drobnic Holan herself harshly criticized the decision. According to her, the previous program “worked well to reduce the virality of misleading content and conspiracy theories,” and she made the stakes clear: “Most people don’t want to have to navigate a flood of misinformation on social media and check it all themselves. The losers here are users who want to use social media without being inundated with false information.”
Holan also criticized the video released by Zuckerberg, calling it “incredibly unfair” to fact-checkers. “These professionals followed rigorous principles and helped Meta for almost a decade,” she stated.
Nina Jankowicz, CEO of the American Sunlight Project, was even more blunt: “Zuck’s announcement is a complete submission to Trump and an attempt to catch up to [Elon] Musk in his race to the bottom. The implications will be far-reaching.” She warned that the change poses a significant risk, opening space for hate speech and unchecked misinformation on the platforms.
Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, also expressed concern: “Meta is saying that it’s up to you to identify the lies on their platforms, and that it’s not their problem if you can’t tell what’s true. This represents a huge setback for online safety, transparency and accountability.”
Nicole Sugerman, of the organization Kairos, highlighted the impact on vulnerable communities: “By abandoning fact-checking, Meta is opening the door to hateful and unchecked misinformation about communities already targeted by attacks, such as Black, brown, immigrant and trans people, which often leads to violence offline.”
Concern in the scientific community
The decision also generated a negative reaction in the scientific community. Kate Cell of the Union of Concerned Scientists said the change will allow anti-science misinformation to proliferate.
Michael Khoo of Friends of the Earth compared the Community Notes model to fossil fuel industry strategies that promote recycling as a solution to plastic waste while deflecting real responsibility onto consumers. “Tech companies need to take responsibility for the misinformation problem their algorithms create,” said Khoo.
He pointed to the rise in attacks on renewable energy projects as a direct consequence of misinformation, arguing that, just as recycling has had limited impact on reducing plastic waste, Meta’s collaborative moderation model will likely do little to curb misinformation.
Reflections and the future of content moderation
Meta’s strategy mirrors the path taken by X under Elon Musk, which reduced its moderation teams and expanded the use of Community Notes. Research shows that since Musk took control of X, hate speech and misinformation on the platform have increased significantly.
With social media playing a vital role in shaping opinion, the change at Meta raises critical questions about the future of content moderation and the responsibility of platforms to ensure a safe and trustworthy digital environment.
Source: The Verge
Source: https://www.hardware.com.br/noticias/especialistas-detonam-o-fim-checagem-de-fatos-da-meta.html