The UK and US have joined forces to combat the rise of child
sexual abuse images generated by Artificial Intelligence (AI)
programmes.
The Home Secretary, alongside the US
Homeland Security Secretary Alejandro Mayorkas, has committed to
exploring further joint action to tackle the alarming rise in
despicable AI-generated images of children being sexually
exploited by paedophiles.
The two countries have issued a joint
statement pledging to work together to explore and develop new
solutions to fight the spread of this imagery, created by
depraved predators, and have called on other nations to join
them.
It comes during the Home Secretary’s visit to Washington this
week, where she visited the National Center for Missing and
Exploited Children (NCMEC), the US-based child protection
organisation whose work includes reporting online child sexual
abuse cases to global law enforcement agencies.
The Home Secretary said:
Child sexual abuse is a truly abhorrent crime and one of the
challenges of our age. Its proliferation online does not respect
borders and must be combatted across the globe.
That is why we are working to tackle the sickening rise of
AI-generated child sexual abuse imagery which incites paedophiles
to commit more offences and also obstructs law enforcement from
finding real victims online.
It is therefore vital we work hand-in-glove with our close
partners in the US to tackle it. I commend the National Center
for Missing and Exploited Children (NCMEC), who work tirelessly
to keep children safe around the world. Social media companies
must take responsibility and prioritise child safety on their
platforms.
Investigations by the Internet Watch Foundation have found that
AI-generated images of children being abused – including babies
and toddlers – are growing, with some depicting the worst kind of
offending under UK and US law. The organisation has also
uncovered an online ‘manual’ dedicated to helping offenders
refine their prompts and train AI to return more and more
realistic results.
The rise is concerning, with law enforcement agencies and
charities convinced an increase in child sexual abuse material
will fuel a normalisation of offending and lead to more children
being targeted.
The surge in AI-generated images could also hinder law
enforcement agencies in tracking down and identifying victims of
child sexual abuse, and in detecting offenders and bringing them
to justice.
In addition, some AI technologies provide offenders with the
capability to create new pictures from benign imagery. For
example, through a process known as inpainting, offenders
can remove articles of clothing completely or swap someone’s face
into indecent images of real children.
The Home Secretary’s visit comes a week after launching a
campaign calling on Meta not to roll out end-to-end encryption on
its platforms without robust safety measures that ensure children
are protected from sexual abuse and exploitation in messaging
channels.
Currently, 800 predators a month are arrested by UK law
enforcement agencies and up to 1,200 children are safeguarded
from child sexual abuse following information provided by social
media companies. If Meta proceeds with its plans, it will no
longer be able to detect child abuse on its platforms. The
National Crime Agency (NCA) estimates 92% of Facebook Messenger
and 85% of Instagram Direct referrals could be lost – meaning
thousands of criminals a year could go undetected.
The partnership with the US also follows the Online Safety Bill’s
passage through Parliament last week.
AI-generated child sexual exploitation and abuse content is
illegal, regardless of whether it depicts a real child or
not. Under the government’s landmark bill, tech companies
will be required to proactively identify this content and remove it.
The bill is deliberately tech-neutral, to ensure it keeps pace
with emerging technologies like AI, and services will
have a duty to stop the spread of illegal content such as child
sexual abuse, terrorist material and fraud. Ofcom will have the
power to direct companies to either use, or make best efforts to
develop or source, technology to identify and remove CSEA
content.
Rapid work is also underway across government to deepen our
understanding of the risks posed by AI and to develop solutions,
including creating the AI Taskforce and hosting the first global
AI Safety Summit this autumn.
Whilst foundation model AIs undoubtedly hold vast potential, and
are crucial to the UK’s mission to become a science and tech
superpower and a leader in safe and responsible AI, many unknowns
remain with this class of technology, which poses significant but
not yet fully understood risks to public safety and national
security.
The UK looks forward to open dialogue and deepened collaboration
with tech company leaders, industry experts and like-minded
nations, as we seek to ensure that the gifts of this technology
are delivered and society is protected.