- First five objectives of the AI Safety Summit to be progressed shared by UK Government
- UK will work closely with global partners to make frontier AI safe, and to ensure nations and citizens globally can realise its benefits
- AI Safety Summit will bring together key countries, leading technology organisations, academia and civil society
The UK government has today set out its ambitions for the AI Safety Summit, which will take place on the 1st and 2nd November at Bletchley Park. The Secretary of State is this week launching formal engagement ahead of the summit, as Jonathan Black and Matt Clifford, the Prime Minister’s Representatives for the AI Safety Summit, begin discussions with countries and some frontier AI organisations. This follows a roundtable hosted by the Secretary of State with a cross-section of civil society groups last week.
The AI Safety Summit will bring together key countries, as well as leading technology organisations, academia and civil society, to inform rapid national and international action at the frontier of Artificial Intelligence (AI) development.
The summit will focus on risks created or significantly
exacerbated by the most powerful AI systems, particularly those
associated with the potentially dangerous capabilities of these
systems. For example, this would include the proliferation of
access to information which could undermine biosecurity. The
summit will also focus on how safe AI can be used for public good
and to improve people’s lives – from lifesaving medical
technology to safer transport.
The summit will draw on a range of perspectives both prior to and at the event itself to inform these discussions. The UK looks forward to working closely with global partners on these issues to make frontier AI safe, and to ensure nations and citizens globally can realise its benefits, now and in the future. As part of an iterative and consultative process, the UK is now sharing the five objectives which will be progressed. These build upon initial stakeholder consultation and evidence-gathering and will frame the discussion at the summit:
- a shared understanding of the risks posed by frontier AI and the need for action
- a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
- appropriate measures which individual organisations should take to increase frontier AI safety
- areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
- a showcase of how ensuring the safe development of AI will enable AI to be used for good globally
Accelerating AI investment, deployment and capabilities represents an enormous opportunity for productivity and public good. The emergence of models with increasingly general capabilities, and step changes in accessibility and application, have created the prospect of up to $7 trillion in growth over the next 10 years and significantly faster drug discovery.
However, without appropriate guardrails, this technology also
poses significant risks in ways that do not respect national
boundaries. The need to address these risks, including at an
international level, is increasingly urgent.
Individual countries, international organisations, businesses, academia and civil society are already taking forward critical work and driving international collaboration on AI, including at the UN, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, the G7, the G20 and standards development organisations. The summit will build on these important initiatives by agreeing practical next steps to address risks from frontier AI.
This will include further discussions on how to operationalise
risk-mitigation measures at frontier AI organisations, assessment of
the most important areas for international collaboration to
support safe frontier AI, and a roadmap for
longer-term action.