Home Secretary Suella Braverman has pledged to work with the US to tackle the “sickening” rise of child sexual abuse images generated by artificial intelligence (AI).
Investigations by an online safety group have found “astoundingly realistic” AI-made images of children, including babies and toddlers, being abused.
The Internet Watch Foundation has also discovered an online “manual” written by offenders to help others use AI to produce even more lifelike images, circumventing safety measures that image generators have put in place.
Some AI technologies also allow paedophiles to create new pictures from benign images by removing clothing or swapping someone’s face on to real indecent images of children, according to the Home Office.
Ms Braverman and US homeland security secretary Alejandro Mayorkas jointly committed to exploring new ways to stop the spread of AI-generated images of child sexual abuse.
The home secretary visited the National Center for Missing & Exploited Children in Virginia during a three-day trip to the US.
She said: “Child sexual abuse is a truly abhorrent crime and one of the challenges of our age. Its proliferation online does not respect borders and must be combated across the globe.
“That is why we are working to tackle the sickening rise of AI-generated child sexual abuse imagery which incites paedophiles to commit more offences and also obstructs law enforcement from finding real victims online.
“It is therefore vital we work hand-in-glove with our close partners in the US to tackle it.
“Social media companies must take responsibility and prioritise child safety on their platforms.”
The Home Office said the rise in AI-generated abuse images is concerning, with law enforcement agencies warning it will fuel a normalisation of offending and lead to more children being targeted.
It comes after Ms Braverman backed a campaign calling on Meta to halt plans to introduce end-to-end encryption to Facebook Messenger and Instagram.
The firm already uses the technology on WhatsApp, meaning message content cannot be seen by anyone outside the chat. The National Crime Agency has warned the move will “massively reduce” authorities’ ability to protect children from online abuse.
Meta has defended the plans, insisting it has “robust safety measures” to detect and prevent abuse while maintaining security.
Under the UK’s online safety legislation, regulator Ofcom could have the power to force platforms to scan messages for abusive or dangerous content – something the platforms argue would undermine user privacy.