News story

Britain's leading the way in protecting children from online predators

UK becomes the first country in the world to create new AI sexual abuse offences to protect children from predators generating AI images.

Children will be protected from the growing threat of predators generating AI images and from online sexual abuse as the UK becomes the first country in the world to create new AI sexual abuse offences.

AI tools are being used to generate child sexual abuse images in a number of sickening ways, including by “nudeifying” real-life images of children or by stitching the faces of other children onto existing child sexual abuse images. The real-life voices of children are also often used in this material, meaning innocent survivors of traumatic abuse are being re-victimised.

Perpetrators are also using those fake images to blackmail children and force victims into further horrific abuse including streaming live images. AI tools are being used to help perpetrators disguise their initial identity and more effectively groom and abuse children online.

To better protect children against this abuse, the Home Secretary, Yvette Cooper, has today (2 February) revealed that the UK will be the first country in the world to:

  • make it illegal to possess, create or distribute AI tools designed to generate child sexual abuse material (CSAM), punishable by up to 5 years in prison
  • make it illegal for anyone to possess AI “paedophile manuals” which teach people how to use AI to sexually abuse children, punishable by up to 3 years in prison

At the same time, the Home Office will:

  • introduce a specific offence for predators who run websites designed for other paedophiles to share vile child sexual abuse content or advice on how to groom children, punishable by up to 10 years in prison
  • give Border Force the necessary powers to keep the UK safe and prevent the distribution of CSAM, which is often filmed abroad, by allowing officers to compel an individual whom they reasonably suspect poses a sexual risk to children to unlock their digital devices for inspection, punishable by up to 3 years in prison, depending on the severity

All 4 measures will be introduced as part of the Crime and Policing Bill when it comes to Parliament. The bill will support the delivery of the government’s safer streets mission to halve knife crime and violence against women and girls in a decade and increase confidence in policing and the wider criminal justice system to its highest levels.

The increased availability of AI-generated child sexual abuse imagery not only poses a real risk to the public by normalising sexual violence against children, but can also lead those who view and create it to go on to offend in real life.

Home Secretary, Yvette Cooper, said:

We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person. This government will not hesitate to act to ensure the safety of children online by ensuring our laws keep pace with the latest threats.

These 4 new laws are bold measures designed to keep our children safe online as technologies evolve. It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public from new and emerging crimes as part of our plan for change.

The Internet Watch Foundation (IWF) has warned that more and more AI-generated child sexual abuse images are being produced.

Over a 30 day period in 2024, IWF analysts identified 3,512 AI CSAM images on a single dark web site. Compared with their 2023 analysis, the prevalence of Category A images (the most severe category) had risen by 10%. 

New data from the charity shows that reports of AI-generated CSAM have risen by 380%, with 245 confirmed reports in 2024 compared with 51 in 2023. Each report can contain thousands of images.

The charity also warns that some of this AI-generated content is so realistic that analysts are sometimes unable to tell the difference between it and abuse filmed in real life. Of the 245 reports the IWF took action against, 193 included AI-generated images so sophisticated and life-like that they were actioned under UK law as though they were actual photographic images of child sexual abuse.

The predators who run or moderate websites designed for other paedophiles to share vile child sexual abuse content or advice on how to groom children are often among the most dangerous to society, encouraging others to view ever more extreme content.

Covert law enforcement officials warn that these individuals often act as ‘mentors’ for others with an interest in harming children, offering advice on how to avoid detection and how to manipulate AI tools to generate CSAM.

Technology Secretary, Peter Kyle said:

For too long abusers have hidden behind their screens, manipulating technology to commit vile crimes and the law has failed to keep up. It’s meant too many children, young people, and their families have been suffering the dire and lasting impacts of this abuse.

That is why we are cracking down with some of the most far-reaching laws anywhere in the world. These laws will close loopholes, imprison more abusers, and put a stop to the trafficking of this abhorrent material from abroad. Our message is clear – nothing will get in the way of keeping children safe, and to abusers, the time for cowering behind a keyboard is over.

Through the new laws, the Home Office is leading on the international stage, continuing to invest in law enforcement capabilities to target and disrupt the highest-harm and most technically sophisticated online child sexual abuse offenders.

That is why Border Force is being given the necessary powers to keep the UK safe and prevent the distribution of CSAM, which is often filmed abroad. Border Force officers will have the power to compel an individual, where they reasonably suspect that the individual poses a sexual risk to children, to unlock their digital devices for inspection.

Once the device is accessed, specialist technology will be used to compare the contents of the device against the Child Abuse Image Database (CAID), to identify the presence of known child sexual abuse material.

Interim Chief Executive of the IWF, Derek Ray-Hill, said:

We have long been calling for the law to be tightened up, and are pleased the government has adopted our recommendations. These steps will have a concrete impact on online safety.

The frightening speed with which AI imagery has become indistinguishable from photographic abuse has shown the need for legislation to keep pace with new technologies.

Children who have suffered sexual abuse in the past are now being made victims all over again, with images of their abuse being commodified to train AI models. It is a nightmare scenario, and any child can now be made a victim, with life-like images of them being sexually abused obtainable with only a few prompts, and a few clicks.

The availability of this AI content further fuels sexual violence against children. It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome today’s announcement, and believe these measures are a vital starting point.

While AI can be used as a force for good to transform people’s lives, make public services more efficient and help bolster creative industries, the risk of its use to children continues to grow.

The crime risks normalising sexual violence against children and re-victimising survivors of traumatic abuse. That is why this government is prepared to build upon the Online Safety Act and will not hesitate to go further if necessary.

Minister for Safeguarding and Violence Against Women and Girls, Jess Phillips, said: 

As technology evolves so does the risk to the most vulnerable in society, especially children. It is vital that our laws are robust enough to protect children from these changes online. We will not allow gaps and loopholes in legislation to facilitate this abhorrent abuse.

However, everyone has a role to play, and I would implore Big Tech to take seriously its responsibility to protect children and not provide safe spaces for this offending.

Crossbench Peer and Chair of 5Rights Foundation, Baroness Kidron said:

It has been a long fight to get the AI Child Sexual Abuse Offences into law, and the Home Secretary’s announcement today that they will be included in the Crime Bill is a milestone. AI-enabled crime normalises the abuse of children and amplifies its spread. Our laws must reflect the reality of children’s experience, and ensure that technology is safe by design and default.

I pay tribute to my friends and colleagues in the specialist police unit that brought this to my attention, and commend them for their extraordinary efforts to keep children safe. All children whose identity has been stolen or who have suffered abuse deserve our relentless attention and unwavering support. It is they, and not politicians, who are the focus of our efforts.

In January, the Home Secretary announced a raft of new measures and an investment of £10 million that will allow us to do more to protect vulnerable children, find more criminals, and get justice for more victims and survivors of child sexual abuse.

More victims of child sexual abuse and exploitation will be given the power to seek an independent review of their cases following the widening of the Child Sexual Abuse Review Panel. Chief constables of all police forces in England and Wales have been urged to re-examine non-recent and live cases of gang exploitation to increase prosecutions.

At the same time, Baroness Louise Casey has been appointed to lead a rapid audit of existing evidence on grooming gangs to help deliver quicker action to tackle the crime and help victims. By Easter, the government will lay out a clear timetable for taking forward the recommendations from the final report of the Independent Inquiry into Child Sexual Abuse (IICSA).

Policy Manager for Child Safety Online at the NSPCC, Rani Govender said:

It is encouraging to see the government take action aimed at tackling criminals who create AI generated child sexual abuse images.

Our Childline service is hearing from children and young people about the devastating impact it can have when AI-generated images of them are created and shared. And, concerningly, victims often won’t even know these images have been created in the first place.

It is vital that the development of AI does not race ahead of child safety online. Wherever possible, these abhorrent harms must be prevented from happening in the first place. To achieve this, we must see robust regulation of this technology to ensure children are protected, and tech companies must undertake thorough risk assessments before new AI products are rolled out.


Published 4 February 2025