Meta’s fact-checking partners undaunted: ‘Look inwards, find other options’
Meta’s founder and CEO Mark Zuckerberg has said that changes to the company’s long-running partnerships with fact-checking organisations will be rolled out imminently in the United States. For now, though, there is ‘no immediate plan’ to end third-party fact-checking and introduce community notes outside the US, reports the Financial Times.
Fact-checkers around the world are, however, already preparing for what they see as the inevitable demise of Meta’s global fact-checking programme. The company currently works with hundreds of fact-checking partners in more than 115 countries.
Against this depressing backdrop, I spoke to seven fact-checking organisations from Brazil, Croatia, Italy, Nigeria, the Philippines and Ukraine to examine the potential impact of this decision on their work, their newsrooms and their audiences.
1. Will the rest of the world follow?
While Meta’s decision only applies to the US for now, experts expect the company to apply it globally in time. None of the fact-checkers I spoke to has been contacted by Meta since the announcement, but most are confident their contracts with the company will remain intact at least until the end of this year.
Despite this, all are preparing for the decision to be extended beyond the US.
‘I see no reason why we should take Mark Zuckerberg at his word,’ says Ana Brakus, Executive Director of Croatian fact-checking organisation Faktograf. ‘He hasn’t shown [himself] to be exactly trustworthy, and it would be very careless to base our future business decisions on taking him on his word.’
In Europe, much may hinge on the EU’s Digital Services Act (DSA). But as Dr Lucas Graves, Professor at the University of Wisconsin-Madison, pointed out in a recent piece for The Guardian, the DSA’s ‘regulatory framework is unfinished and untested’, with the first formal charges brought under the DSA, against Elon Musk’s X, as yet unresolved.
My sources in the European fact-checking community are not optimistic about the EU enforcing the DSA. So far, the EU’s only public comment on the matter is that any platform that wishes to remove [fact-checking] policies ‘will have to conduct a risk assessment and send it to the EU Commission’.
‘We hope the EU will be strong in facing these threats, but I don’t have any particular optimism on this point,’ says Tommaso Canetta, Deputy Director of Italian fact-checking organisation Facta News.
‘As a fact-checker and journalist, I don’t think I can tell the EU institutions what they should do specifically, but I think that they should defend the rules they have created and implement them.
‘If Meta proves that it can effectively counter disinformation even without the fact-checking programme, no fines should be issued. But if they can’t, EU rules should be enforced.
‘Grovelling to foreign pressure and disowning its own rules would be very dangerous, and morally wrong.’
Brakus is not sure Meta has any incentive to follow EU regulation, and questions the capacity of the European Commission to enforce its own laws, especially against a tech giant. She points to the precedent set by Musk, who quickly dismantled X’s content moderation system with little pushback.
‘The Digital Services Act has a chance to work, but it’s too soon to tell. Even when the EU approves good regulation, it’s not always great at implementing it, and it is notoriously slow.’
2. How will this impact fact-checkers’ finances?
Back in 2022, Meta bragged about building ‘the largest global fact-checking network of any platform’ and having ‘contributed more than $100 million to programmes supporting our fact-checking efforts since 2016’.
Many organisations around the world rely on Meta’s funding and may now be left in the lurch by its decision. Yet none of the outlets I spoke to said that losing funding from Meta would force them to fold. Most even said they would be able to secure other sources of funding to make up for any losses.
Several outlets refrained from giving me figures on how much of their funding comes from Meta, citing the non-disclosure agreements they had to sign with the company. The ones that did share some information said it is significantly less than 50% of their budget, hovering between 20% and 30%.
Several fact-checkers mentioned they’ve already diversified their sources of revenue so they don’t rely too much on a single programme.
Yevhen Fedchenko is the Co-founder and Chief Editor of the Ukrainian fact-checking organisation StopFake.org. When the full-scale invasion of Ukraine started in 2022, Meta was the first organisation to reach out to StopFake.org and ask what it could do to help. Fedchenko now hopes Meta sees the value of their partnership in safeguarding Ukrainians against Russian disinformation in the midst of war.
‘Financial threats have a different meaning for us because we have gone through any kind of threats during the war,’ he says. ‘[Meta’s money] helped us to adjust to those changing circumstances, but we could still survive [without them].
‘That’s what Ukrainian audiences expect from us. I can’t imagine how difficult it’d be for Ukrainian society to survive without access to our fact-checks.’
Fedchenko points to recent work debunking lies that discredit Ukraine and its leadership in the eyes of Western audiences, falsehoods that could affect the provision of international aid to the country.
He also mentions their work debunking lies that discredit Ukrainian refugees, which can create conflict with receiving communities.
This was roughly the outlook of the fact-checking organisations I spoke to: they will need to adjust their budgets to avoid lay-offs and programme cuts, but the biggest loss will be to their countries’ information ecosystems.
Most outlets expect they will have at least a year to prepare for any potential changes as their contracts have been renewed for 2025.
Kemi Busari is the Editor of Nigerian fact-checking organisation Dubawa, which has been partnering with Meta since 2019. If Meta’s decision came to them as abruptly as it did in the US, he says, they would be forced to reduce their staff, which in turn would diminish their capacity to fact-check claims.
But if they have a year to prepare, Busari thinks they will be able to find other revenue streams. ‘We are also viewing this as a challenge to look inwards and think about other options.’
‘We have this understanding that fact-checking is not a business and should never be a business. It’s a social enterprise. With that kind of mindset, we should be able to find some other ways to continue to do our work.’
Tai Nalon is the Executive Director of Aos Fatos, a fact-checking organisation in Brazil that has been partnering with Meta since 2018. In addition to grants, they have diversified their financing through the licensing of journalistic content, a membership programme and the sale of technology and intelligence services.
Nalon says Meta’s support has been essential for their journalistic work.
‘Our partnership with Meta was crucial for establishing Aos Fatos as a leading journalistic organisation in Brazil and across the continent. For a long time, Meta shared tools for monitoring trends that supported our journalistic investigations, such as the public CrowdTangle API.’
‘Monitoring the attacks in Brasilia on 8 January 2023 would not have been possible without a robust strategy to combat misinformation through fact-checking and investigations.’
Nátalia Leal, CEO of Brazilian fact-checking organisation Agência Lupa, says their partnership with Meta has also allowed them to grow as a company and to expand their audience by reaching users they could not reach before. Their sources of revenue range from selling their content to other news outlets to offering workshops and training.
‘We will need more people supporting our work,’ Leal says. ‘It’s not just the money. It’s the perception of the importance of journalism and fact-checking.’
3. How will this impact the information ecosystem?
Fact-checkers say the most important impact of Meta’s decision will be felt in the information ecosystem, especially in countries in the Global South.
Facebook and Instagram are still major sources of news in many of those countries, so the removal of fact-checking from news feeds could increase the amount of misleading information users see.
One of the countries that could be affected is the Philippines, where 61% of people get their news from Facebook, according to our Digital News Report 2024.
Celine Samson is the Head of the online verification team at VERA Files, a Filipino fact-checking organisation which has been partnering with Meta since 2018.
‘Facebook is still king here,’ she says. ‘Despite the rise of other platforms, Facebook continues to be the most used social media platform – it’s where local Filipinos and our huge diaspora get their news. If the programme gets removed, we are worried about the quality of the information they would be getting.’
Fedchenko says Facebook has become an informational lifeline for many Ukrainians during the war, but Russia has also used the platform to spread war propaganda, which is one of the reasons fact-checking is so crucial.
‘People are using social networks here as a platform for life-saving communication,’ he says. ‘It’s a place where people share important information, and our ability to verify that information is also crucial for people.’
Dubawa in Nigeria researched the types of misinformation circulating across social media platforms in the lead-up to the country’s 2023 general election. Drawing on published fact-check reports from three African fact-checking outlets, it found that Facebook was the platform where falsehoods were most prevalent.
Busari says Meta shutting down the programme would be a blow to democracies on the continent and around the world. ‘If you are not equipping fact-checkers to combat that kind of disinformation, then it is a very huge threat.’
Nalon from Brazil’s Aos Fatos stresses that this decision has been followed by a relaxation of the rules regarding hate speech. Fact-checking has often played a crucial role in showing that certain types of misinformation were conspiracy theories promoted by hateful groups.
‘[Without the programme] it will become more difficult to distinguish high-quality, professionally-verified information from other types of content on social media. Trust will be weakened. Lax rules will likely turn the network into a sort of hub for scams.
‘It is what we’ve seen on X, which is now regarded by Zuckerberg as an example.’
4. Will Meta’s new ‘community notes’ work?
Meta’s plan is simply to replace its current system of verification with X-style ‘community notes’, where users themselves add context, clarifications or fact-checks to posts.
Its plans are still nebulous, but Meta said that community notes will be written and rated by contributing users and ‘will require agreement between people with a range of perspectives to help prevent biased ratings’.
Samson from VERA Files is concerned about these changes. She wonders who these users will be, how Meta will select them, what the company means by ‘a diversity of perspectives’ and what kinds of facts will appear in these notes. Another concern is how Meta will apply these rules across different cultural and political contexts, in light of the kind of disastrous mistakes it has made in the past.
‘I am not totally discounting [community notes], but I am very sceptical of it, just because of the era we’re living in now, where people don’t even agree on what facts are.
‘In our experience, when you put out a fact-check, some people would just not believe it, even if you supply them with so many sources.’
Meta will also get rid of restrictions on how issues like immigration and gender identity can be discussed. The company has already implemented some of these changes with an update to its Hateful Conduct Policy.
For example, its new policies allow users to ‘make allegations of mental illness or abnormality when based on gender or sexual orientation’. It has also deleted warnings against self-admission of racism, homophobia and Islamophobia.
While this goes beyond the scope of what fact-checkers do, Leal from Brazil’s Lupa thinks these changes and the new community notes system will result in the amplification of Meta’s most radical users.
‘Social media will become more polarised. The system of community notes is based on engagement, and the most engaged users are often the most polarised. If they are the ones responsible for community notes, these notes will not be impartial and there will probably be more rage and more hate.’
Will requiring agreement between people with a range of perspectives help prevent biased ratings? Leal does not think so.
‘This idea of “different sides” evaluating something as a sign of balance is nonsense to me. It is not fair for an evaluation from someone who bases it on scientific evidence to have the same weight as an evaluation based on conspiracy theories, for example.
‘This is what happens in X and is very different from balance or from giving different perspectives to users,’ she said.
One major study in the US has shown that right-leaning users tend to share misinformation at a greater volume than left-leaning users, and are more often fact-checked, which could explain the perception that fact-checking is biased.
But the claim that fact-checking itself is biased is not supported by research. There is some evidence, though, that fact-checking helps reduce the spread of misinformation.
Many fact-checkers I spoke to vehemently repudiate Zuckerberg’s assertion that they are biased or that they hinder freedom of speech.
‘We pick up false information and fact-check it by providing context or ratings in some cases,’ says Busari. ‘There is a trackable impact of how that has reduced false information on Meta. We try to provide people with accurate information. I don’t think there is any way community notes can replace that. Think about X.’
Canetta from Italy’s Facta News also uses X as an example of how a faulty content moderation system can lead to the deterioration of the information ecosystem of a platform, and how this degradation can bleed into real life.
‘The platform quickly became a haven for people spreading hate and propaganda through demonstrably false content, and this created harmful effects,’ he says. ‘Think about the riots in the UK last summer that were propelled by false news about immigration circulating unchallenged on X.’
5. What kind of effect will Zuckerberg’s message have?
Both Zuckerberg’s video message and Meta’s press release claimed fact-checkers were biased, and presented their work as censorship. But fact-checkers do not provide content moderation for Meta or decide what type of content is moderated.
They simply provide a service: fact-checking misleading posts. It’s not fact-checkers who decide what Meta does with their work.
Meta has explained its own system very clearly:
- Its own technology detects posts that are likely to be misinformation, based on various signals.
- Fact-checkers then independently review those pieces of content and rate their accuracy.
- When content has been rated by fact-checkers, Meta adds a notice to it so that people can read additional context.
- This means that Meta itself ultimately decides how content found to be false or misleading is labelled or down-ranked.
Moreover, for fact-checking organisations to be part of Meta’s programme, they have to be certified through non-partisan institutions: either the International Fact-Checking Network (IFCN) or the European Fact-Checking Standards Network (EFCSN).
Organisations have to go through independent assessments and follow strict standards of non-partisanship. This is part of the IFCN’s Code of Principles that all members have to adhere to, and that Meta has praised in the past.
Many of the fact-checkers I spoke to were concerned that Meta’s scapegoating would result in a rising tide of online attacks.
‘Both this decision and the things that Zuckerberg said to justify this decision are really bad,’ says Canetta. ‘Fact-checkers already face a huge amount of attacks. Calling us censors and politically biased will give more weapons to the people that already want to harass us.’
Brakus from Croatia’s Faktograf told me that as soon as Meta made its announcement, they started receiving emails with threats and attacks.
‘Our research shows that harassing fact-checkers is just part of the populist playbook and is used to create mistrust in society. When someone like Mark Zuckerberg accuses you of censorship, what do you think happens after that?
‘Anyone who’s previously attacked us will feel emboldened to do it again.’
Agência Lupa’s Leal says that they too have previously been targeted, with a peak of attacks happening during the COVID pandemic.
Brakus’ Faktograf has been tracking this kind of harassment. Its latest report, published in 2023, tracked attacks on fact-checking media outlets in Europe through a survey of 41 fact-checking outlets from 28 European countries; most respondents said the attacks had become more frequent since they joined Meta’s programme.
The report also outlines how fact-checkers are falsely presented as ‘censors’ who down-rank or even remove user content, which is not the case.
Despite Meta’s decision, several of the organisations I spoke to still hope these policy changes do not extend abroad. Brakus thinks fact-checking is more crucial than ever now that the news ecosystem has become fragmented, most people access news online, and levels of mistrust are unprecedentedly high.
‘Most people want good information. When they are sharing stuff, they actually think they are helping their friends and families, and they deserve good information.’
- This article, with limited edits for publication, was first published here.
- Meta has been one of the funders of the Reuters Institute in the past and supported its Journalism Innovation Project and Trust in News Project. At the time this piece was published, Meta was not one of the Institute’s funders.