Policy paper

Joint Statement: Tackling child sexual abuse in the age of Artificial Intelligence

Updated 6 November 2023

This was published under the 2022 to 2024 Sunak Conservative government

Child sexual abuse takes many forms. It can occur in the home, online, or in institutions and has a life-long impact on the victim. WeProtect Global Alliance’s 2023 Global Threat Assessment finds that child sexual abuse and exploitation online is escalating worldwide, in both scale and methods. As the online world is borderless, we must work as an international community to tackle this horrific crime.

Artificial Intelligence (AI) presents enormous opportunities to help tackle the threat of online child sexual abuse. It has the potential to transform and enhance the ability of industry and law enforcement to detect child sexual abuse cases. To realise this, we affirm that AI must be developed for the common good of protecting children from sexual abuse across all nations.

Alongside these opportunities, AI also poses significant risks to our efforts to tackle the proliferation of child sexual abuse material and prevent the grooming of children. AI tools can be used by child sexual offenders to create child sexual abuse material, risking an epidemic in the proliferation of this material. Data from the Internet Watch Foundation (IWF) found that in just one dark web forum, over a one-month period, 11,108 AI-generated images had been shared, and the IWF was able to confirm that 2,978 of these depicted AI-generated child sexual abuse material. The increase in the creation and proliferation of AI-generated child sexual abuse material poses significant risks: it may fuel the normalisation of offending behaviour and undermine the ability of law enforcement around the world to identify children who need safeguarding. In addition, AI can also facilitate grooming, for example by scripting sexually extortive interactions with children. Whilst these technologies are evolving at an exponential rate, the safety of our children cannot be an afterthought, and we must all work in collaboration to make sure these technologies have robust safety measures in place.

Issues in tackling child sexual abuse arising from AI are inherently international in nature, and so action to address them requires international cooperation. We resolve to work together to ensure that responsible AI is used to tackle the threat of child sexual abuse, and we commit to continuing to work collaboratively so that the risks AI poses to these efforts do not become insurmountable. We will seek to understand and, as appropriate, act on these risks through existing fora.

All actors have a role to play in ensuring the safety of children from the risks of frontier AI. We note that companies developing frontier AI capabilities have a particularly strong responsibility for ensuring the safety of these capabilities. We encourage all relevant actors to provide transparency on their plans to measure, monitor and mitigate the capabilities that may be exploited by child sexual offenders. At a national level, we will seek to develop policies across our countries that ensure safety in light of the child sexual abuse risks.

We affirm that the safe development of AI will enable its transformative opportunities to be used for good to tackle child sexual abuse and will support partners in prioritising and streamlining their processes.

As part of wider international cooperation, we resolve to sustain the dialogue and technical innovation around tackling child sexual abuse in the age of AI.

Co-signatories

  • UK National Crime Agency
  • UK National Police Chiefs’ Council (NPCC)
  • Internet Watch Foundation (IWF)
  • WeProtect Global Alliance
  • Thorn
  • United Nations Interregional Crime and Justice Research Institute - Centre for Artificial Intelligence and Robotics
  • Stability AI
  • Trilateral Research

  • National Society for the Prevention of Cruelty to Children (NSPCC)
  • National Center for Missing and Exploited Children (NCMEC)
  • Canadian Centre for Child Protection (C3P)
  • Lucy Faithfull Foundation
  • Child Rescue Coalition
  • SafeToNet
  • International Justice Mission
  • Safe Online
  • Childlight
  • Suojellaan Lapsia, Protect Children
  • South West Grid for Learning (SWGfL)
  • Oxford Internet Institute
  • Common Sense Media
  • Large-scale Artificial Intelligence Open Network (LAION)
  • Ontocord.AI
  • Snapchat
  • TikTok
  • OnlyFans

  • US Department of Justice
  • Australian Government
  • German Government
  • Italian law enforcement
  • Korean National Police Agency

  • Sacha Babuta, on behalf of the Centre for Emerging Technology and Security at the Alan Turing Institute
  • Professor Abhilash Nair, University of Exeter Law School