Policy paper

The role of AI in addressing misinformation on social media platforms

The CDEI has published a report on the role of AI in addressing misinformation on social media platforms, detailing the findings of an expert forum it convened last year with representatives from platforms, fact-checking organisations, media groups, and academia.

Details

Background

In 2020, the CDEI hosted an expert forum that brought together a range of stakeholders, including platforms, fact-checking organisations, media groups, and academics, to discuss the role of AI in addressing misinformation on social media platforms. It sought to understand:

  • The role of algorithms in addressing misinformation on platforms, including what changed during the pandemic and why, and the limits of what algorithms can do;
  • How much platforms tell us about the role of algorithms within the content moderation process, and the extent to which there should be greater transparency in this regard;
  • Views on the effectiveness of platform approaches to addressing misinformation, including where there may be room for improvement in the immediate future.

Measures to address misinformation online differ substantially from those targeting disinformation (content created and shared with deliberate intent to deceive), including in the role AI plays. To avoid an overly broad discussion and add to ongoing debates, the CDEI focused its work solely on measures to address misinformation.

Key findings

  • Algorithms enable content to be moderated at a speed and scale that would not be possible for human moderators operating alone.
  • The onset of COVID-19 and resulting lockdown led to a reduction in the moderation workforce, just as the volume of misinformation was rising. Platforms responded by relying on automated content decisions to a greater extent, without significant human oversight.
  • Increased reliance on algorithms led to substantially more content being incorrectly identified as misinformation. Participants noted that algorithms still fall far short of human moderators in distinguishing between harmful and benign content. One reason is that misinformation is often subtle and context-dependent, making it difficult for automated systems to analyse; this is particularly true of misinformation relating to new phenomena such as COVID-19 (see the sketch after this list).
  • Platforms have issued reassurances that the increased reliance on algorithms is only temporary, and that human moderators will continue to be at the core of their processes.
  • Platforms use a range of policies and approaches to address misinformation, including removing content, downranking content, applying fact-checking labels, increasing friction in the user experience, and promoting truthful and authoritative information. However, a lack of evidence hinders our understanding of how effective these methods are.
  • While platforms have begun to disclose more information about how they deal with harmful content, for example via transparency reports, they could go further. Transparency reports often provide limited detail across important areas including content policies, content moderation processes, the role of algorithms in moderation and design choices, and the impact of content decisions.
  • Platforms emphasised the importance of having clear guidance from the government on the types of information they should be disclosing, how often and to whom. As the new online harms regulator, Ofcom is well positioned to set new benchmarks for clear and consistent transparency reporting.
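
To make these findings concrete, below is a minimal, hypothetical sketch of the kind of graduated decision pipeline described above: a classifier score routes content towards removal, downranking, labelling, human review, or no action. The keyword heuristic, threshold values, and all names are invented for illustration only and do not represent any platform's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"              # high-confidence violation
    DOWNRANK = "downrank"          # likely misinformation: reduce reach
    LABEL = "label"                # borderline: attach a fact-checking label
    HUMAN_REVIEW = "human_review"  # model is uncertain: escalate to a person
    ALLOW = "allow"                # no intervention

@dataclass
class ModerationDecision:
    score: float   # estimated probability that the post is misinformation
    action: Action

def classify(text: str) -> float:
    """Stand-in for a real misinformation classifier.

    A production system would use a trained model; this keyword heuristic
    exists only so the routing logic below is runnable.
    """
    flagged_terms = {"miracle cure", "5g causes", "plandemic"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def moderate(text: str) -> ModerationDecision:
    score = classify(text)
    # Graduated interventions: only very confident predictions are actioned
    # automatically; the uncertain middle band is escalated to human review.
    if score >= 0.9:
        action = Action.REMOVE
    elif score >= 0.7:
        action = Action.DOWNRANK
    elif score >= 0.5:
        action = Action.LABEL
    elif score >= 0.3:
        action = Action.HUMAN_REVIEW
    else:
        action = Action.ALLOW
    return ModerationDecision(score, action)

if __name__ == "__main__":
    for post in ["Lovely weather today", "This miracle cure beats any vaccine"]:
        decision = moderate(post)
        print(f"{decision.action.value:>12}  ({decision.score:.2f})  {post}")
```

The design choice the sketch highlights is the one raised in the forum: where the thresholds sit determines how much content is actioned without human oversight, and narrowing the human-review band (as happened during the pandemic) increases the rate of incorrect automated decisions.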

Next steps

There are steps that can be taken today to help mitigate misinformation. Undertaking more research into the efficacy of moderation tools, experimenting with new moderation methods, increasing the transparency of platform moderation policies, and investing more in supporting authoritative content are all interventions worthy of investigation.

To improve the efficacy of moderation tools, the CDEI is currently working with DCMS on the Online Safety Data Initiative, which is designed to test methodologies that enable better and safer access to high-quality datasets for training AI systems to identify and remove harmful and illegal content from the internet. Further updates about this project will be shared in the coming months.
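
One way such labelled datasets could support research into moderation efficacy is by measuring how often a moderation model wrongly flags benign content, the failure mode highlighted in the key findings. The sketch below is purely illustrative: the toy model, labels, and data are invented and imply nothing about the initiative's actual methodology.

```python
from typing import Callable, Iterable, Tuple

def false_positive_rate(
    model: Callable[[str], bool],
    labelled_posts: Iterable[Tuple[str, bool]],
) -> float:
    """Fraction of benign posts the model wrongly flags as misinformation.

    `labelled_posts` yields (text, is_misinformation) pairs, e.g. drawn
    from a curated evaluation dataset.
    """
    flagged_benign = 0
    benign_total = 0
    for text, is_misinfo in labelled_posts:
        if not is_misinfo:
            benign_total += 1
            if model(text):
                flagged_benign += 1
    return flagged_benign / benign_total if benign_total else 0.0

# Toy usage with an invented keyword model and a hand-labelled sample.
sample = [
    ("Vaccines are tested in clinical trials", False),
    ("This miracle cure ends the pandemic", True),
    ("Cure for boredom: take a walk", False),  # benign, but contains "cure"
]
naive_model = lambda text: "cure" in text.lower()
print(f"False positive rate: {false_positive_rate(naive_model, sample):.2f}")
```

Running this reports a false positive rate of 0.50: the naive model flags the benign "cure for boredom" post, illustrating why context-dependent content defeats simple automated systems and why high-quality evaluation data matters.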

Updates to this page

Published 5 August 2021
