A new agreement between 10 countries plus the European Union,
reached today (21st May) at the AI Seoul Summit, has committed
nations to work together to launch an international network to
accelerate the advancement of the science of AI safety.
The “Seoul Statement of Intent toward International Cooperation
on AI Safety Science” will bring together the publicly backed
institutions, similar to the UK's AI Safety Institute, that have
been created since the UK launched the world's first at the
inaugural AI Safety Summit – including those in the US, Japan and
Singapore.
Coming together, the network will build “complementarity and
interoperability” between their technical work and approach to AI
safety, to promote the safe, secure and trustworthy development
of AI.
This will include sharing information about models, their
limitations, capabilities and risks, as well as monitoring
specific “AI harms and safety incidents” where they occur and
sharing resources to advance global understanding of the science
around AI safety.
This was agreed at the leaders' session of the AI Seoul Summit,
bringing together world leaders and leading AI companies to
discuss AI safety, innovation and inclusivity.
As part of the talks, leaders signed up to the wider Seoul
Declaration which cements the importance of enhanced
international cooperation to develop AI that is “human-centric,
trustworthy and responsible”, so that it can be used to solve the
world's biggest challenges, protect human rights, and bridge
global digital divides.
They recognised the importance of a risk-based approach to governing AI, to maximise the benefits and address the broad range of risks from AI, and to ensure the safe, secure and trustworthy design, development, deployment and use of AI.
The Prime Minister said:
“AI is a hugely exciting technology – and the UK has led global
efforts to deal with its potential, hosting the world's first AI
Safety Summit last year.
“But to get the upside we must ensure it's safe. That's why I'm
delighted we have got agreement today for a network of AI Safety
Institutes.
“Six months ago at Bletchley we launched the UK's AI Safety
Institute. The first of its kind. Numerous countries followed
suit and now with this news of a network we can continue to make
international progress on AI safety.”
The Technology Secretary said:
“AI presents immense opportunities to transform our economy and
solve our greatest challenges - but I have always been clear that
this full potential can only be unlocked if we are able to grip
the risks posed by this rapidly evolving, complex
technology.
“Ever since we convened the world at Bletchley last year, the UK
has spearheaded the global movement on AI safety and when I
announced the world's first AI Safety Institute, other nations
followed this call to arms by establishing their own.
“Capitalising on this leadership, collaboration with our overseas
counterparts through a global network will be fundamental to
making sure innovation in AI can continue with safety, security
and trust at its core.”
Deepening partnerships with AI safety institutes and similar
organisations is an area of work the UK has already kickstarted
through a landmark agreement with the United
States earlier this year. The UK's AI Safety Institute is the world's first publicly backed organisation of its kind, with £100m of initial funding. Since it was created, a number of other countries have
launched their own AI Safety Institutes, including the US, Japan
and Singapore, all of which have signed the commitments announced
today.
Building on November's Bletchley Declaration, the newly agreed statement recognises safety, innovation and inclusivity as interrelated goals, and advocates for embracing socio-cultural and linguistic diversity in AI models.
These agreements follow the freshly announced “Frontier AI Safety Commitments” from 16 AI technology companies, which set out that the leading AI developers will take input from governments and AI Safety Institutes when setting the thresholds at which they would consider risks unmanageable. In a world first, the commitments have been signed by AI companies from around the world, including companies based in the US, China, the Middle East and Europe.
Notes to editors
- The Seoul Declaration and the Seoul Statement of Intent on AI
Safety Science can be found in full here.
- It has been signed by Australia, Canada, the European Union,
France, Germany, Italy, Japan, the Republic of Korea, the
Republic of Singapore, the United States of America and the
United Kingdom.
- On 21 and 22 May, the United Kingdom and the Republic of
Korea will host the AI Seoul Summit. It will bring together
international governments and select global industry, academia
and civil society leaders for discussions across two days.
- It builds on the inaugural AI Safety Summit hosted by the United Kingdom at Bletchley Park in November last year and will be one of the largest ever gatherings of nations, companies and civil society on AI.
- On day one, President Yoon Suk Yeol of the Republic of Korea and the Prime Minister co-chaired a virtual session for world leaders on innovation and inclusivity, as well as the commitments made at Bletchley.
- On day two, Minister of Science and ICT, H.E. Lee Jong Ho of the Republic of Korea, and the Secretary of State for Science, Innovation and Technology will co-chair a ministers' session with representatives from countries, the European Union and the UN, alongside key figures from industry, academia and civil society, looking at AI safety, sustainability and resilience.