Fact-checking: what is it, and how does it work?

Following Meta's decision to change its content moderation policies, a reflection on ongoing research to combat online disinformation.

Marinella Petrocchi | researcher in cybersecurity, Institute of Informatics and Telematics, CNR; guest scholar, Scuola IMT Alti Studi Lucca

On 7 January 2025, Meta announced the end of its fact-checking programme for content moderation on Facebook, Instagram and Threads. The programme was based on collaboration with independent verification organisations (certified by the International Fact-Checking Network, IFCN), charged with analysing content published on Meta's platforms with the aim of curbing the spread of disinformation: if a post was assessed as false or misleading, the visibility of the content was reduced and users received a warning.

In a video posted on his official channel, Mark Zuckerberg stated that the company would in future draw inspiration from the 'Community Notes' model of X (formerly Twitter), first in the US and then on a global scale. According to Zuckerberg, the intention is to 'go back to the roots' and focus on simplifying policies, reducing errors and restoring freedom of expression. During the announcement, Zuckerberg referred to Donald Trump's new presidential term, emphasising the need to prioritise freedom of speech.

Until now, Meta has used independent fact-checkers to verify the accuracy of news by consulting original sources and public data. The new strategy is to replace them with a system similar to X's Community Notes, based on crowdsourcing, i.e. an approach to content production in which a large group of people is asked to intervene to flag misleading content.

But first of all: what is fact-checking? The term describes a set of procedures and methodologies aimed at verifying the accuracy and truthfulness of information disseminated through the media, both traditional and digital. Although the verification of sources and news has a long history in journalism, fact-checking as a structured activity has become increasingly important in the age of the internet and social media, where large amounts of content of all kinds circulate rapidly.

In today's information ecosystem, the sheer volume of news shared online makes it difficult to distinguish reliable sources from personal opinions or distorted, if not outright false, information (so-called fake news). This complexity is further accentuated by the role of recommendation algorithms, which select and highlight content based on users' interests and interactions. In this context, fact-checking becomes a relevant method for ensuring the quality of information and countering the spread of misleading news.

Who carries out the checks

A central aspect of fact-checking is ensuring that the content disseminated (especially on sensitive topics such as politics, health, science and economics) is correct. Organisations such as the Poynter Institute's International Fact-Checking Network (IFCN) set professional standards and guidelines for fact-checkers. Among the practices considered important is the transparency of sources, which helps to maintain a high level of public trust. Another is the timely debunking of false news, accompanied by verifiable evidence, in an attempt to break the cycle of uncontrolled sharing typical of social media.

Platforms such as Snopes (one of the oldest and best known, particularly active in the English-speaking world), PolitiFact (mostly devoted to statements and news in the United States political arena) and FactCheck.org (which specialises in monitoring the accuracy of political news) operate with precisely this in mind, identifying and analysing potentially misleading content.

Newspapers, research organisations and government agencies also often resort to fact-checking, as it is considered a means to strengthen their authority by showing that they base their claims on factual data. The Washington Post has its own team, called 'Fact Checker', famous for its 'Pinocchio scale', with which it assigns various levels of 'lying' to content, from the omission of relevant information to the outright distortion of facts.

The BBC and Reuters, too, have services and staff dedicated to analysing news and debunking hoaxes and unfounded rumours. In Italy, Pagella Politica specialises in verifying the statements of Italian politicians, while EU vs Disinfo is a task force of the European External Action Service (EEAS) aimed at countering disinformation and propaganda, in particular of foreign origin.

Results?

Several academic studies indicate that fact-checking can improve understanding of facts and decrease the circulation of misinformation, although its effectiveness varies depending on factors such as individual predisposition, the socio-political context and the promptness of corrections.

A study published in Political Behavior back in 2010 shows that fact-checking can correct mistaken beliefs in part of the public. However, the same study also points to the 'backfire effect', a phenomenon whereby a correction can end up reinforcing erroneous beliefs in highly polarised contexts, although the effect has proved less frequent than expected.

Integrating fact-checking with media literacy also appears to enhance users' ability to identify biased news: those who regularly draw on verified content adopt, on average, a more critical approach. Other studies show that the speed with which news spreads also matters: if the correction comes late, the erroneous content may have already reached a wide audience and influenced opinions. It must be said that the sheer number of posts, articles and videos produced daily makes the timely verification of each piece of information, especially by manual processes, extremely complex. Moreover, technological innovations, such as advanced artificial intelligence algorithms (including large language models, LLMs), make it easier to create misleading content, thus requiring a parallel evolution of verification tools.
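The timeliness problem can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative, with an invented doubling rate and initial reach (not figures from any study): under simple exponential sharing, every hour of delay in issuing a correction doubles the audience already exposed to the false claim.

```python
# Toy illustration of why timeliness matters (all numbers are hypothetical):
# if the reach of a false post doubles every hour, each extra hour of delay
# in publishing a correction doubles the audience already exposed to it.

GROWTH_PER_HOUR = 2.0  # assumed hourly multiplication of cumulative reach

def reach(hours: float, initial: float = 10.0) -> float:
    """Cumulative users reached `hours` after publication (assumed model)."""
    return initial * GROWTH_PER_HOUR ** hours

for delay_h in (1, 3, 6, 9):
    print(f"correction after {delay_h} h: "
          f"~{reach(delay_h):,.0f} users exposed before it arrives")
```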

Finally, and this is not easy, it is important to assess how much of the information produced by fact-checking is actually usable by end users. A recent survey by NewsGuard, a company specialising in assessing the reliability of online information sources, states that 'on Russian, Chinese and Iranian disinformation, Meta's fact-checking programme offered a solution in 14 per cent of cases'. In practice, only 14 per cent of the posts promoting the 30 Russian, Chinese and Iranian disinformation narratives identified by NewsGuard itself were labelled as false or misleading on Meta's platforms.

The wisdom of the crowd

The question is: does removing fact-checking from social platforms and relying solely on the 'wisdom of the crowd', i.e. bottom-up correction mechanisms such as X's Community Notes, work better? Moving from independent fact-checkers to the wisdom of the crowd certainly involves some critical issues.

First of all, verification professionals have the knowledge and resources to find primary sources and analyse data in depth. An approach based solely on user involvement may not guarantee such capabilities, especially in complex areas. Furthermore, online communities can be influenced by bots, trolls and organised groups, which steer 'consensus' towards unverified narratives. In practice, users with similar ideological positions may find themselves in echo chambers, where distorted information is confirmed by the majority. And finally, there is the question of timeliness. Like fact-checking, the wisdom of the crowd relies on users taking the time to report or correct erroneous content. In the meantime, misleading content can spread quickly, especially if it appeals to the emotions.
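The echo-chamber risk is the one that X's published approach to Community Notes tries to blunt: a note is surfaced only when it is rated helpful by users who usually disagree with one another. The snippet below is a toy sketch of that 'bridging' idea, not X's actual algorithm (which infers rater viewpoints from rating history via matrix factorization); the camps, raters and votes are invented.

```python
# Toy sketch of the 'bridging' idea behind crowd-based notes (NOT X's
# actual algorithm): a note is surfaced only if raters from *different*
# camps find it helpful, which blunts echo chambers and coordinated
# voting by a single group.

from statistics import mean

# Invented data: each rater belongs to a camp, and each note collects votes.
CAMP = {"u1": "A", "u2": "A", "u3": "B", "u4": "B", "u5": "B"}
VOTES = {
    "note_1": [("u1", True), ("u2", True), ("u3", True), ("u4", False)],
    "note_2": [("u1", True), ("u2", True), ("u4", False), ("u5", False)],
}

def show_note(votes, threshold=0.5):
    """Surface a note only if every camp's average rating clears the threshold."""
    by_camp = {}
    for rater, helpful in votes:
        by_camp.setdefault(CAMP[rater], []).append(1.0 if helpful else 0.0)
    # Cross-camp agreement is required, not just a global majority.
    return len(by_camp) > 1 and all(mean(v) >= threshold for v in by_camp.values())

for note_id, votes in VOTES.items():
    print(note_id, "->", "show" if show_note(votes) else "hold")
```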

To sum up, participatory mechanisms such as Community Notes can certainly favour pluralism and rapidity of response - especially when false content is reported in a timely manner - but the absence of a professional filter and possible attempts at manipulation can undermine their overall reliability. A balanced strategy to effectively counter misinformation requires the integration of community input and the expertise of fact-checkers.

Simplifying work

The research group of which I am a member, a long-standing collaboration between the IMT School and the Institute of Informatics and Telematics (IIT) of the CNR in Pisa, aims to identify strategies to reduce the time and costs involved in establishing the accuracy and veracity of online information by focusing directly on the source. Specialised organisations, such as the aforementioned NewsGuard, do indeed provide valuable assessments of the reliability of digital news publishers. However, while these assessments offer useful insights, analysing criteria such as the presence of biased or propagandistic content requires considerable resources and time. As a result, many online publishers are never evaluated, creating a gap in coverage. The research conducted by IMT and CNR-IIT aims precisely at automating the process of assessing the trustworthiness of news sites.

Our research has produced several significant results, such as the ability to automatically classify the reliability of a journalistic source by analysing its texts, or by studying the social interactions of users who share its articles. We have also developed TROPIC, a prototype that helps experienced journalists simplify their investigative work. An ongoing project aims to assess when and how a large language model (LLM) can effectively replace humans in the assessment of source reliability.
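As a rough illustration of what classifying a source's reliability from its texts can look like, here is a minimal sketch assuming a corpus of articles labelled with outlet-level reliability ratings is available for training. It is not TROPIC or the group's actual pipeline; the data and labels are invented.

```python
# Minimal sketch of text-based source-reliability classification
# (illustrative only; not the IMT/CNR-IIT pipeline or TROPIC).
# Assumes a training set of articles labelled with their outlet's
# reliability rating, e.g. derived from NewsGuard-style scores.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labelled data: article text -> outlet reliability label.
articles = [
    "Officials confirmed the figures in a press briefing on Tuesday.",
    "They do not want you to know the shocking truth behind this!",
]
labels = ["reliable", "unreliable"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(articles, labels)

# Classify text coming from a previously unrated source.
print(model.predict(["Experts dispute the claim, citing public records."]))
```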

Assessing the veracity of a news story can be approached from different perspectives, such as fact-checking, collecting community notes (along the lines of X's initiative) or analysing the credibility of the source. Each methodology has specific advantages and limitations, and we believe the key lies in the synergy between these approaches. Integrating these methods, with a focus on process automation, can help reduce the time and cost of manual analyses, making the system more efficient and scalable.
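One way to picture that synergy is as a weighted blend of whatever signals are available for a given post. The sketch below is hypothetical: the weights, 0-1 scales and function name are invented for illustration, not a published method.

```python
# Hypothetical composite trust score blending the three signal families
# discussed in the article; weights and scales are invented for illustration.

from typing import Optional

def composite_trust(fact_check: Optional[float],
                    community_notes: Optional[float],
                    source_credibility: float) -> float:
    """Blend the available signals into a single 0-1 trust score.

    fact_check:         1.0 = verified true, 0.0 = debunked, None = unchecked
    community_notes:    average helpfulness of corrective notes, None if absent
    source_credibility: automated source-level score, assumed always available
    """
    signals = {"source": (source_credibility, 0.3)}
    if fact_check is not None:
        signals["fact_check"] = (fact_check, 0.5)   # expert signal weighs most
    if community_notes is not None:
        signals["community"] = (community_notes, 0.2)
    total_weight = sum(w for _, w in signals.values())
    return sum(v * w for v, w in signals.values()) / total_weight

# An unchecked post from a low-credibility source, flagged by the community:
print(round(composite_trust(None, 0.2, 0.35), 2))  # -> 0.29
```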
