Darren Jones (Bristol North West) (Lab)
I am grateful, Mr Deputy Speaker, that this Adjournment debate on
the regulation of artificial intelligence has been granted. I
declare my interest as set out in the Register of Members’
Financial Interests.
Britain is at a turning point. Having left the European Union,
irrespective of what people thought about that decision, we have
decided to go it alone. This new chapter in the long history of
our great nation is starting to unfold, and we have a number of
possible destinations ahead. We stand here today as a country
with great challenges and an identity crisis: what is modern
Britain to become? Our economy is, at best, sluggish; at worst,
it is in decline. Our public services are unaffordable,
inefficient and not delivering the quality of service the public
should expect. People see and feel those issues right across the
country: in their pay packets, in the unfilled vacancies at work,
and in their local schools, GP surgeries, dentists, hospitals and
high streets. All of this is taking place in a quickly changing
world in which Britain is losing influence and control, and for
hostile actors who wish Britain—or the west more broadly—harm,
those ruptures in the social contract present an opportunity to
exploit.
Having left the European Union, I see two destinations ahead of
us: we can either keep doing what we are doing, or modernise our
country. If we take the route to continuity, in my view we will
continue to decline. There will be fewer people in work, earning
less than they should be and paying less tax as a consequence.
There will be fewer businesses investing, meaning lower profits
and, again, lower taxes. Income will decline for the Treasury,
but with no desire to increase the national debt for day-to-day
spending, that will force us to take some very difficult
decisions. It will be a world in which Britain is shaped by the
world, instead of our shaping it in our interests.
Alternatively, we can decide to take the route to modernity,
where workers co-create technology solutions at work to help them
be more productive, with higher pay as a consequence; where
businesses invest in automation and innovation, driving profits
and tax payments to the Treasury; where the Government take
seriously the need for reform and modernisation of the public
sector, using technology to individualise and improve public
services while reducing the cost of those services; and where we
equip workers and public servants with the skills and training to
seize the opportunities of that new economy. It will be a modern,
innovative Britain with a modern, highly effective public sector,
providing leadership in the world by leveraging our strengths and
our ability to convene and influence our partners.
I paint those two pictures—those two destinations: continuity or
modernity—for a reason. The former, the route to continuity,
fails to seize the opportunities that technological reforms
present us with, but the latter, the route to modernity, is built
on the foundations of that new technological revolution.
This debate this evening is about artificial intelligence. To be
clear, that is computers and servers, not robots. Artificial
intelligence means, according to Google,
“computers and machines that can reason, learn, and act in such a
way that would normally require human intelligence or that
involves data whose scale exceeds what humans can analyse.”
These AI machines can be categorised in four different ways.
First, reactive machines have a limited application based on
pre-programmed rules. These machines do not use memory or learn
themselves. IBM’s Deep Blue machine, which beat Garry Kasparov at
chess in 1997, is an example. Secondly, limited memory machines
use memory to learn over time by being trained using what is
known as a neural network, which is a system of artificial
neurons based on the human brain. These AI machines are the ones
we are used to using today. Thirdly, theory of mind machines can
emulate the human mind and take decisions, recognising and
remembering emotions and reacting in social situations like a
human would. Some argue that these machines do not yet exist, but
others argue that AI such as ChatGPT, which can interact with a
human in a humanlike way, shows that we are on the cusp of a
theory of mind machine existing. Fourthly, self-aware machines are machines that are aware of their own existence and have capabilities equal to or better than those of a human. Thankfully, as far as I am aware, such machines do not exist today.
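[The contrast between the first two categories can be sketched in a few lines of code. This is an illustrative sketch only, not drawn from the debate: the rule-based function and the single-neuron "perceptron" below are simplified stand-ins for a Deep Blue-style reactive machine and a limited memory machine trained on examples.]

```python
# Illustrative sketch: a "reactive machine" versus a "limited memory" machine.

# 1. Reactive machine: fixed, pre-programmed rules; no memory, no learning.
def reactive_move(board_score: int) -> str:
    # A Deep Blue-style engine applies hand-tuned evaluation rules.
    return "attack" if board_score > 0 else "defend"

# 2. Limited memory machine: a single artificial neuron (a "perceptron")
# whose weights are adjusted from training examples over time.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w * x + b > 0 else 0
            err = target - pred
            w += lr * err * x  # learning: update weights from experience
            b += lr * err
    return w, b

# Learn the rule "x above 5 is class 1" purely from labelled examples.
samples = [(1, 0), (2, 0), (4, 0), (6, 1), (8, 1), (9, 1)]
w, b = train_perceptron(samples)
classify = lambda x: 1 if w * x + b > 0 else 0
```

[The reactive machine's behaviour never changes; the perceptron's behaviour is entirely a product of the data it was trained on, which is why modern neural networks are categorised as limited memory machines.]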
That all might be interesting for someone who is into tech, but
why am I putting it on the public record today? I am doing so
because there are a number of risks that we as a Parliament and
the Government must better understand, anticipate and mitigate.
These are the perils on our journey to continuity or modernity.
Basic artificial intelligence, which helps us to find things on
the internet or to book a restaurant, is not very interesting.
The risk is low. More advanced artificial intelligence, which can
perform the same tasks as a junior solicitor, a journalist or a
student who is supposed to complete their homework or exam
without the assistance of AI, presents a problem. We already see
the problems faced by workers who have technology thrust upon
them, instead of being consulted about its use. The consequences
are real today and carry medium risks—they are disruptive.
Then we have the national security or human rights-level risks,
such as live facial recognition technologies that inaccurately
identify someone as a criminal, or a large language model that
can help a terrorist understand how to build a bomb or create a
novel cyber-security risk, or systems that can generate deepfake
videos, photos or audio of politicians saying or doing things
that are not true to interfere with elections or to create fake
hostage recordings of someone’s children.
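[The tiered framing above can be expressed as a simple lookup. This is an illustrative sketch only: the use-case names and their tier assignments are assumptions drawn from the examples in the speech, not any statutory or regulatory classification.]

```python
# Illustrative risk-tier lookup based on the examples given in the speech.
RISK_TIERS = {
    "restaurant booking assistant": "low",
    "internet search helper": "low",
    "automated drafting for junior legal work": "medium",
    "homework or exam generation": "medium",
    "live facial recognition for policing": "high",
    "deepfake generation of public figures": "high",
}

def risk_tier(use_case: str) -> str:
    # Unknown applications default to "unclassified" pending assessment.
    return RISK_TIERS.get(use_case, "unclassified")
```

[A risk-based regulatory regime of the kind discussed later in the debate would, in effect, attach different obligations to each tier rather than treating all AI systems alike.]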
(Strangford) (DUP)
I commend the hon. Gentleman on bringing this debate forward. It
is a very deep subject for the Adjournment debate, but it is one
that I believe is important. Ethics must be accounted for to
ensure that any industries using AI are kept safe. One issue that
could become increasingly prominent is the risk of cyber-threats,
which he referred to, and hacking, which not even humans can
sometimes prevent. Does he agree that it is crucial that our
Government and our Minister undertake discussions with UNESCO,
for example, to ensure that any artificial intelligence that is
used within UK industry is assessed, so as to deal with the
unwanted harms as well as the vulnerabilities to attack to ensure
that AI actors are qualified to deal with such exposure to
cyber-attacks? In other words, the Government must be over this
issue in its entirety.
The hon. Member is of course right. In the first part of his
intervention, he alluded to the risk I have just been referring
to, where machines can automatically create, for example, novel
cyber-risks in a way that the humans who created those systems
might not fully understand and that are accessible to a wider
range of actors. That is a high risk that is either increasingly real today or already active and available to those who wish to do us harm.
The question, therefore, is what should we in Parliament do about
it? Of course, we want Britain to continue to be one of the best
places in the world to research and innovate, and to start up and
scale up a tech business. We should also want to transform our
public services and businesses using that technology, but we
must—absolutely must—make sure that we create the conditions for
this to be achieved in a safe, ethical and just way, and we must
reassure ourselves that we have created those conditions before
any of these high-risk outcomes take place, not in the aftermath
of a tragedy or scandal.
That is why I have been so pleased to work with UNESCO, as the
hon. Gentleman mentioned, and assistant director general Gabriela
Ramos over the past few years, on the UNESCO AI ethics framework.
This framework, the first global standard on AI ethics, was adopted by all 193 UNESCO member states in 2021, including the United Kingdom. Its grounding in human rights, actionable policies, readiness assessment methodology and ethical impact assessments provides the basis for the safe and ethical adoption of AI across countries. I therefore ask the Minister, in
summing up, to update the House on how the Government are
implementing their commitments from the 2021 signing of the AI
ethics framework.
As crucial as the UNESCO AI ethics framework is, in my view the
speed of innovation requires two more things from Government:
first, enhanced intergovernmental co-ordination, and secondly,
innovation in how we in this House pass laws to keep up with the
speed of innovation. I will take each in turn.
First, on enhanced intergovernmental co-ordination, I wrote to
the Government at the end of April calling on Ministers to play
more of a convening role on the safe and secure testing of the
most advanced AI, primarily with Canada, the United States and—in
so far as it can be achieved—China, because those countries,
alongside our own, are where the most cutting-edge companies are
innovating in this space. I was therefore pleased to see in the
Hiroshima communiqué from last week’s G7 a commitment to
“identify potential gaps and fragmentation in global technology
governance”.
As a parliamentary lead at the OECD global parliamentary network
on AI, I also welcome the request that the OECD and the Global
Partnership on Artificial Intelligence establish the Hiroshima AI
process, specifically in respect of generative AI, by the end of
this year.
I question, however, whether these existing fora can build the
physical or digital intergovernmental facilities required for the
safe and secure testing of advanced AI that some have called for,
and whether such processes will adequately supervise or have
oversight of what is taking place in start-ups or within
multinational technology companies. I therefore ask the Minister
to address these issues and to provide further detail about the
Hiroshima AI process and Britain’s contribution to the OECD and
GPAI, which I understand has not been as good as it should have
been in recent years.
I also welcome the engagement of the United Nations’ tech envoy
on this issue and look forward to meeting him at the AI for Good
summit in Geneva in a few weeks’ time. In advance of that, if the
Minister is able to give it, I would welcome his assessment of
how the British Government and our diplomats at the UN are
engaging with the Office of the Secretary-General’s Envoy on
Technology, and perhaps of how they wish to change that in the
future.
Secondly, I want to address the domestic situation here in the UK
following the recent publication of the UK’s AI strategy. I
completely agree with the Government that we do not want to
regulate to the extent where the UK is no longer a destination of
choice for businesses to research and innovate, and to start up
and scale up their business. An innovation-led approach is the
right approach. I also agree that, where we do regulate, that
regulation must be flexible and nimble to at least try to keep up
with the pace of innovation. We only have to look at the Online
Safety Bill to learn how slow we can be in this place at
legislating, and to see that by the time we do, the world has
already moved on.
Where I disagree is that, as I understand it, Ministers have
decided that an innovation-led approach to regulation means that
no new legislation is required. Instead, existing regulators—some
with the capacity and expertise required, but most without—must
publish guidance. That approach feels incomplete to me. The
European Union has taken a risk-based approach to regulation,
which is similar to the way I described high, medium and low-risk
applications earlier. However, we have decided that no further legislative work is required while, as I pointed out on Second Reading of the Data Protection and Digital Information (No. 2) Bill, we are deregulating in other areas, with consequences for the application of consumer and privacy law as it relates to AI.
Surely, we in this House can find a way to innovate in order to
draft legislation, ensure effective oversight and build
flexibility for regulatory enforcement in a better way than we
currently do. The current approach is not fit for purpose, and I
ask the Minister to confirm whether the agreement at Hiroshima
last week changes that position.
Lastly, I have raised my concerns with the Department and the
House before about the risk of deepfake videos, photo and audio
to our democratic processes. It is a clear and obvious risk, not
just in the UK but in the US and the European Union, which also
have elections next year. We have all seen the fake picture of
the Pope wearing a white puffer jacket, created by artificial
intelligence. It was an image that I saw so quickly whilst
scrolling on Twitter that I thought it was real until I stopped
to think about it.
Automated political campaign videos, fake images of politicians
being arrested, deepfake videos of politicians giving speeches
that never happened, and fake audio recordings are already
available. While they may not all be of perfect quality just yet,
we know how the public respond to breaking news cycles on social
media. Many of us look at the headlines or the fake images for a split second, register that something has happened, and most of the time assume it to be true. That could have wide-ranging
implications for the integrity of our democratic processes. I am
awaiting a letter from the Secretary of State, but I am grateful
for the response to my written parliamentary question today. I
invite the Minister to say more on that issue now, should he be
able to do so.
I am conscious that I have covered a wide range of issues, but I
hope that illustrates the many and varied questions associated
with the regulation of artificial intelligence, from the mundane
to the disruptive to the risk to national security. I welcome the
work being done by the Chair of the Science, Innovation and
Technology Committee on this issue, and I know that other
Committees are also considering looking at some of these
questions. These issues warrant active and deep consideration in
this Parliament, and Britain can provide global leadership in
that space. Only today, OpenAI, the creator of ChatGPT, called
for a new intergovernmental organisation to have oversight of
high-risk AI developments. Would it not be great if that
organisation was based in Britain?
If we get this right, we can take the path to modernity and
create a modern Britain that delivers for the British people, is
equipped for the future, and helps shape the world in our
interests. If we get it wrong, or if we pick the path to
continuity, Britain will suffer further decline and become even
less in control of its future. Mr Deputy Speaker, I pick the path
to modernity.
9.22 pm
The Parliamentary Under-Secretary of State for Science, Innovation and Technology
I congratulate the hon. Member for Bristol North West on securing this excellent debate and on his excellent opening speech. The issue ahead of us
is an international issue, and as he said, the UK is at the
forefront of AI development, with our history and with the Turing
and Lovelace institutions around the country. We have amazing AI
clusters, and it is right that we should be at the forefront of
the solutions he talked about. It will not have escaped many of
us with a long-standing interest in AI that this is a really
important time for the technology’s development. Of equal note is
the focus that the Government are giving to ensuring that we
seize the opportunities of AI while tackling the risks that have
been highlighted, along with our commitment to iterating and
adapting our approach as the technology continues to develop.
I welcome the opportunity to speak about how we are delivering on
the commitments of the national AI strategy, including shaping
the international governance of AI through active engagement in
key multilateral fora such as UNESCO. I believe we are well
placed to become a global AI superpower by delivering on the
foundations laid down in the national AI strategy and its three
pillars: investing in and planning for the long-term needs of the
AI ecosystem; supporting the transition to an AI-enabled economy,
capturing the benefits of innovation in the UK and ensuring that
AI benefits all sectors and regions; and ensuring that the UK
gets the national and international governance of AI technologies
right to encourage innovation and investment and to protect the
public and our fundamental values.
The Government recognise that AI has the potential to transform
all areas of life, from making more medical breakthroughs
possible to powering the next generation of tech such as
driverless cars. In 2021 we published our national AI strategy—a
10-year vision to make the UK an AI superpower. Since 2014, we
have invested over £2.5 billion in AI, including almost £600
million towards the near £1 billion 2018 AI sector deal, which
kick-started the growth of the already well-established AI
landscape in the UK; £250 million to develop the NHSX AI lab to
accelerate the safe adoption of AI in health and care; £250
million for the Centre for Connected and Autonomous Vehicles to
develop the future of mobility in the UK; investment in the Alan
Turing Institute, with over £46 million to support Turing AI
fellowships to develop the next generation of top AI talent; and
over £372 million of investment in UK AI companies through the
British Business Bank.
The AI strategy also emphasises the need to invest in skills and
diversity to broaden the AI workforce. Our £30 million AI and
data science conversion course and scholarship programme was set
up to address the lack of diversity and supply of talent in the
UK AI labour market—diversity not as a tick-box exercise, but diversity of thinking, to ensure that AI products, services and development reflect the broader perspectives that the hon. Member rightly talked about.
Alongside skills, the Government recognise the need for long-term
investment in computing. In March, we announced £900 million for
an exascale supercomputer and AI research resource. Building on
that, last month we announced £100 million in initial start-up
funding for a foundation model taskforce to invest in the AI
stack to build foundation model capability, ensure capabilities
for key use cases and ensure UK leadership in the safety and
reliability of foundation models.
We have seen huge leaps forward in our delivery on the governance
pillar of the national AI strategy. In March, we published a
White Paper setting out the UK’s context-based, proportionate and
adaptable approach to AI regulation, representing a world-leading
step forward in this policy space. The White Paper outlines five
clear outcome-focused principles that regulators should consider
to facilitate the safe and innovative use of AI in the industries
that they monitor. Crucially, the principles provide clarity to
businesses by articulating what we want responsible AI to look
like.
That is not all. In October 2022, we launched the AI standards
hub to increase the UK’s contribution to the development of
global AI technical standards. Through the hub, we are working
with international initiatives such as the OECD’s catalogue of
tools and metrics for trustworthy AI to increase global awareness
of technical standards as critical tools to advance the worldwide
development and adoption of responsible AI.
On that note, I turn my focus squarely to international
engagement on AI, which is a key priority for the Government. As
a world leader in AI, we play an important role in shaping the
international development and governance of AI. We promote our
interests in bilateral relationships with key partners such as
the US and Japan and in multilateral fora such as the Council of
Europe, the Global Partnership on Artificial Intelligence,
UNESCO, the OECD, the G7, the International Organisation for Standardisation and the International Electrotechnical Commission.
With the US, we held the inaugural meeting of the comprehensive
dialogue on technology and data in January. A key deliverable for
2023 is to strengthen the UK-US collaboration on AI technical
standards development and tools for trustworthy AI, including
through joint research and information sharing, and support for
commercial co-operation. We had previously signed in September
2020 a US-UK declaration on co-operation in AI research and
development, representing a shared vision for driving
technological breakthroughs in AI. With Japan, as the hon. Member
rightly said, we agreed the Hiroshima accord only recently, on 18
May. It is a landmark new global strategic partnership,
signifying our intent to work together to maintain strategic
advantage in emerging technologies such as AI. The accord builds
on the UK-Japan digital partnership that I launched in December
2022, which established a framework for deeper UK-Japan
collaboration across digital infrastructure and technologies,
data, digital regulation and digital transformation.
We have also been working closely with Japan as part of its G7
presidency this year. At the end of April, I attended the G7
digital ministerial meeting in Japan, where I signed the G7
digital ministerial declaration alongside my counterparts. That
declaration emphasises the importance of responsible AI and
global AI governance. It endorses an action plan for promoting
global interoperability between tools for trustworthy AI and for
co-operating on upcoming AI opportunities and challenges.
At the Council of Europe, we are working closely with like-minded
nations on the proposed convention on AI—a first-of-its-kind
legal agreement to help protect human rights, democracy and the
rule of law. At the OECD, we are an active member of the working
party on AI governance, which supports the implementation of the
OECD’s AI principles. It enables the exchange of experience from
best practice to advance the responsible stewardship of AI. At
the global partnership, we are a key contributor and founding
member. At the 2022 GPAI ministerial summit in Japan, we
announced £1.2 million of funding to develop a net zero data
space for AI applications, which is in addition to a previous £1
million investment to advance GPAI research on data justice,
collaborating with our world-leading Alan Turing Institute and 12
pilot partners in low and medium-income countries.
We are also leading the development of global AI technical
standards in standards development organisations such as the
International Organisation for Standardisation and the
International Electrotechnical Commission, and we are leading the
development of AI assurance techniques as additional tools for
trustworthy AI. Crucially, these techniques help to measure,
evaluate and communicate the trustworthiness of AI systems across
the development and deployment life cycle, to enable
organisations to determine whether AI technologies are aligned
with regulatory requirements.
We are also aware of the increasing prominence of AI in
discussions held across other UN fora, including the Internet
Governance Forum and the International Telecommunication Union,
and through the Global Digital Compact’s focus on AI. The
Government welcome the opportunity that the compact provides for
the multi-stakeholder community to set out an ambitious shared
agenda, chart a path for concrete action towards delivering it,
and promote the sharing of best practice, evidence and
learning.
Let me turn my attention to UNESCO. The UK was actively involved
in the development of its recommendation on the ethics of AI, and
UK organisations such as the Alan Turing Institute have supported
the development of implementation tools. As we have heard, we, along with all 192 other UNESCO member states, adopted the recommendation in November 2021, demonstrating our commitment to
developing a globally compatible system of responsible and
ethical AI governance.
Our work aligns with the values of UNESCO’s recommendation. For
example, through our work at the Council of Europe negotiations,
we are helping to respect, protect and promote human rights,
fundamental freedoms and human dignity. In doing so through close
collaboration with our international partners, we aim to ensure
that our citizens can live in peaceful, just and interconnected
societies. Through our AI and data science conversion course and
scholarship programme, we are ensuring diversity and
inclusiveness by addressing these issues in the UK AI labour
market. Finally, as one small example of the wider work we are
delivering, through our net zero data space for AI applications,
funded through GPAI, we are delivering on our net zero policy
objectives, ensuring a flourishing environment and ecosystem.
In summary, we have taken great strides in our delivery of the
national AI strategy under all three pillars: investing in and
planning for the long-term needs of the AI ecosystem; supporting
the transition to an AI-enabled economy; and ensuring that the UK
gets the national and international governance of AI technologies
right. It goes without saying that the opportunities afforded by
AI are quite staggering. Indeed, as a result of AI technologies, UK productivity could rise by up to a third across sectors, and UK GDP could be 10.3% higher in 2030—the equivalent of an additional £232 billion.
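[As a back-of-the-envelope check of the figures just quoted (a sketch, not an official calculation): if a 10.3% uplift corresponds to £232 billion, the implied 2030 baseline GDP is roughly £2.25 trillion.]

```python
# Sanity check of the quoted figures: a 10.3% uplift worth £232 billion
# implies a baseline 2030 GDP of about £2.25 trillion.
uplift_share = 0.103     # GDP 10.3% higher
uplift_value_bn = 232    # £232 billion additional
baseline_bn = uplift_value_bn / uplift_share
print(round(baseline_bn))  # ~2252 (£ billion)
```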
But the hon. Gentleman is also absolutely right to look at the
risks and talk about the dangers. We have to do this on an
international basis. The AI White Paper was the first of its
kind, although I would urge him to exercise caution when he says
that we do not feel that we need legislation. At the moment, we
are building on the layers of existing regulation, but the White
Paper outlines the five principles, and we are looking at the
regulatory sandboxes to test regulation with scientists, the
sector and the academics involved, so that we can co-create the
solutions that will be required. But we clearly have to do this
at pace, because it was only a few months ago that we first heard
of ChatGPT, and we now have prompt engineers—a new, relatively well-paid occupation that until recently no one had ever heard of.
As a world leader in AI, it is imperative that we continue to
actively engage bilaterally and in multilateral fora such as
UNESCO, but also in the OECD, the GPAI and others, to shape the
international AI governance landscape. Governing it effectively
will ensure that we achieve the right balance between responding
to risks and maximising the opportunities afforded by this
transformative technology.
Question put and agreed to.