Mick Whitley (Birkenhead) (Lab)
I beg to move,
That this House has considered the potential impact of artificial
intelligence on the labour market.
It is a privilege to serve under your chairmanship, Dame Maria. I
am grateful to all hon. Friends and Members who have taken the
time to participate in this important debate. It is a particular
pleasure to see my hon. Friend the Member for Ellesmere Port and
Neston (Justin Madders) in his place. I wish to
draw the attention of the House to my entry in the Register of
Members’ Financial Interests.
The rapid advance of artificial intelligence technology poses a
severe threat to the labour market and to workers’ rights. The
negative effect of AI on the workforce cannot be ignored, as it
has the potential to displace jobs, lead to economic inequality
and erode the rights of workers. AI has the capability to
automate jobs and various industries, which could result in
widespread unemployment and exacerbate existing socioeconomic
disparities. Low-skilled workers, who are already vulnerable to
exploitation, are likely to be the most impacted, leading to a
growing divide between the haves and the have-nots.
Furthermore, the implementation of AI in the workplace could
result in the violation of workers’ rights such as privacy,
autonomy and fair pay. The use of AI to monitor and control
workers could lead to increased exploitation, discrimination and
the creation of a toxic work environment. If left unchecked, the
rise of AI could lead to a future where workers are replaced by
machines, and human dignity is sacrificed for the sake of
corporate profits. The deployment of AI in the workplace must be
accompanied by strong regulations and policies that prioritise
the wellbeing and rights of workers.
Governments and companies must take responsibility for the
harmful impact of AI on the labour market and take immediate
action to prevent its negative effects. Failure to do so would
result in an irreparable loss of jobs, economic inequality and a
violation of workers’ basic rights.
For Members who have heard me speak before in this House, that
introduction must have felt unusually stilted, or perhaps
uncharacteristically eloquent. That is because it was written
entirely by ChatGPT—one of a number of increasingly sophisticated
AI chatbots that have become readily accessible in the past few
months. At this point, let me reassure my parliamentary
researcher, who is watching this debate, that he does not need to
worry about his P45—yet. The unusual distinction of being the
first Member of Parliament to openly read AI-generated text into
Hansard belongs to the hon. Member for Bosworth (Dr Evans). Like
him, I have chosen to turn to one of the most widely used
AI-powered technologies to illustrate the rapid advances taking
place in the field of artificial intelligence, and the
potentially devastating consequences that this could have on
workers in every sector of the economy.
Of course, the impacts of this AI revolution will be felt far
beyond the labour market. Information is an increasingly valuable
commodity; it is also a potential weapon of war. The danger is
simple: technologies such as ChatGPT and DALL-E could be used to
proliferate dangerous misinformation and subvert our already
compromised democracy. We need further and extensive scrutiny of
the risks and of the steps that we need to take to better protect
our constituents’ data privacy.
I have chosen to use the limited time available today to look at
the impact of artificial intelligence on the labour market, and
particularly on workers’ rights. That is not only because I have
spent my adult life fighting for workers’ rights, but because it
is in the labour market that that change is happening most
rapidly, and it is in the everyday experience of work that the
disruption of AI is being most keenly felt.
We have heard much in recent years about how we stand on the edge
of a fourth industrial revolution. That revolution is now well
under way; its effects will be profound and far-reaching. Every
part of our public life will be transformed. I want to be clear:
I am no enemy of progress. We should embrace the potential of AI
to change our lives for the better, whether by improving
diagnosis and treatment of disease or by driving sustainable
economic growth that can benefit us all. Just as the first
industrial revolution brought about an era of unprecedented
wealth for an elite few but condemned the British working class
and colonised people across the world to a life of precarity and
poverty, the AI revolution will create again—if we allow it to do
so—a world of winners and losers.
Dr Lisa Cameron (East Kilbride, Strathaven and Lesmahagow) (SNP)
I thank the hon. Gentleman for making an impressive speech and
extremely good points about the welfare of workers. As a union
rep, I agree that we must have safeguards around AI developments.
Does he agree that in order to make this new technology available
to all, we should seek to level up across the UK and ensure that
coding opportunities and the jobs of the future are available to
young people in all areas, including deprived areas?
The hon. Member makes a good point. When it comes to AI, all
workers need protections.
Research by PricewaterhouseCoopers suggests that AI will be
responsible for 46% of the UK’s long-term output growth. It
promises job creation in sectors such as health, education, and
science and technology. At the same time, it threatens
devastating job losses in sectors such as manufacturing,
transport and public administration. Some 7% of all UK jobs could
be automated away within the next five years, and as many as 30%
could disappear within 20 years.
The last time we experienced systemic economic displacement on
anything like that scale was during the deindustrialisation of
the 1980s and 1990s. The architects of that policy believed that
nothing should be done to support those communities that carried
the cost of the economic and social fallout, the legacy of which
my constituency of Birkenhead continues to live with to this day.
They followed the ancient mantra that the strong do what they can
and the weak suffer what they must. We must not repeat that
mistake again. I have called today’s debate to make an urgent
plea for a rights-based and people-focused approach to artificial
intelligence, and for a process that puts the voices and
interests of workers at its heart. In this new machine age, we
must assert more than ever the fundamental right of all people to
a basic level of economic security and dignity at work.
Dr Caroline Johnson (Sleaford and North Hykeham) (Con)
The hon. Gentleman makes an interesting point, much of which I
support. It is not controversial to suggest that the NHS would
benefit from more doctors or that digital tech has the potential
to improve people’s lives. The Health and Social Care Committee
has been looking at both of those issues. As part of one of its
inquiries, the Committee went to San Francisco about a month ago
to look at how AI can help in medicine. We found that computers
can be taught to read mammograms of breast screening tests. That
means that, rather than having to be read by two independent
doctors, the mammograms can be read by one doctor and one
computer. Apparently, the process is more accurate than one
involving two computers or, indeed, two doctors. Therefore, AI
has the potential not just to cause the workforce issues raised
by the hon. Gentleman, but to benefit areas with workforce
shortages.
I thank the hon. Member for those points. I have already said
that we must embrace AI and what it does for us. We are not here
to stop progress, but my point is that the Government need to
build in regulatory rights and protections.
The benefits of this new technological revolution must be shared
by everyone, not just an elite few. I do not claim to have the
answers to a challenge of such enormous magnitude—I look forward
to hearing hon. Members’ thoughts in a few moments’ time—but a
starting point must surely be guaranteeing support to those
sectors and communities that will be most affected by the threat
and reality of economic displacement. That means strengthening
our collective social security net and seriously considering the
role that a universal basic income might play in ensuring a
decent standard of living in a labour market increasingly
characterised by job scarcity. It means investing in skills and
lifelong learning, ensuring that workers whose employment is lost
to AI have the opportunity to find well-paid and similarly
rewarding work.
In any democracy we have to recognise that technology is never
ideologically neutral. Every technological system reflects the
interests and biases of its creators and funders. Our challenge
is to ensure that AI technologies reflect a multiplicity of
voices, including those of workers, and not just in their
application but in their conception and design as well. I hope we
will continue to discuss how we can achieve that.
A people-focused approach to AI must also mean doing more to
guarantee the rights of those workers who are already working
alongside artificial intelligence and related technologies in
their workplace. The AI working group set up by the Trades Union
Congress surveyed thousands of workers in producing its report on
the worker experience of AI and associated technologies. It shows
vividly how workers are increasingly managed by machines, how
their rights and autonomy are being steadily eroded, and how
automated processes often perpetuate human prejudice when making
decisions on employees’ performance, hiring and promotions.
The Government’s response was set out in the Department for
Science, Innovation and Technology’s recently published AI White
Paper, which advocates a light-touch approach and effectively
leaves the market to regulate itself. Although Ministers have
devised five fundamental principles that should inform the
adoption and use of AI in workplaces, they do not intend to place
those principles on a statutory footing. Instead, the
implementation of those principles will be left to underfunded
and overstretched regulators, such as the Information
Commissioner’s Office and the Equality and Human Rights
Commission.
That contrasts starkly with the models adopted by other developed
economies. The European Union’s Artificial Intelligence Act is
likely to be one of the most comprehensive pieces of legislation
ever passed on this subject, while California—the very centre of
global technology innovation—is preparing to implement measures
to protect the privacy and civil liberties of workers. These
measures include a new office for AI, with the authority to guide
the development of new automated systems, as well as statutory
restrictions on the use of automated decision making in the
workplace.
The proposal set out by the TUC’s AI manifesto, copies of which I
have brought to Westminster Hall for Members today, involves
taking a very different position from that taken by the
Government. Building on the existing framework of equalities
legislation, it calls for a rights-based approach to manage the
transition to AI that would strengthen equality protections,
guarantee workers the right to human contact and require a human
review of high-risk decisions that have been automated, and
protect the right to disconnect for all workers. It is also
absolutely right to acknowledge the need to listen to
workers—their voices and their experiences—in managing this
transition. It is essential that we recognise and value the role
of trade unions as a vehicle for getting those voices heard.
It is for those reasons that the manifesto proposes a statutory
duty for employers to consult trade union representatives before
adopting AI and associated technologies. It is also why the
manifesto urges employers to agree collective agreements with
unions to govern the use of AI in the workplace.
Last December, when I questioned the then Business Secretary—the
right hon. Member for Welwyn Hatfield (Grant Shapps)—on the merits of introducing
a statutory duty to consult, he expressed interest and offered to
meet me to discuss it further. I think the Minister present today
will remember that, and I am interested to hear whether he and
the new Business Secretary share the right hon. Gentleman’s
interest.
Finally, the manifesto emphasises the fact that workers’
participation can be achieved only if workers understand the
processes and technologies at work. In environments in which
decisions are increasingly dictated by machines, people need to
know, more than ever, what data is being held on them and how it
is used.
I am aware that time is short and I look forward to hearing other
hon. Members’ contributions. I will conclude my remarks by saying
that on 17 May I will introduce a ten-minute rule Bill that
builds on the TUC’s important work and which I hope will bring us
a bit closer to the rights-based approach I am advocating and
which we urgently need. I ask any colleagues interested in
supporting that Bill to speak to me after this debate.
2.44pm
Damian Collins (Folkestone and Hythe) (Con)
It is a pleasure to serve under your chairship this afternoon,
Dame Maria, and I congratulate the hon. Member for Birkenhead
(Mick Whitley), both on securing this very
important debate and on his excellent speech.
Artificial intelligence is an enabling technology. It is driving
the digital age, but it is based on a series of points of data
that are gathered by computer systems and processed in order to
make decisions. It still requires a huge amount of human
intervention in determining what data will be drawn on and
therefore what decisions should be made. Consequently, there has
to be a level of human responsibility, as well.
We can see already from the development of AI that it is not just
a question of computer systems learning from existing patterns of
behaviour; they are also effectively thinking for themselves. The
development of AI in chess is a good example of that. Not only are AI systems learning to make the moves that a human would make, always selecting the perfect combination and therefore being much more successful; when given the command to win the game, they have also developed ways of playing that are unique, that the human mind has not thought of or popularised, and that are yet more efficient at winning. That is very
interesting for those interested in chess. Perhaps not everyone
is interested in chess, but that shows the power of AI to make
autonomous decisions, based on data and information it is given.
Humans invented the game of chess, but AI can learn to play it in
ways not thought of by humans.
The application of AI in the defence space is even more scary, as
touched on by the hon. Member for Birkenhead. AI-enabled weapons
systems can be aggressive, make decisions quickly and behave in
unpredictable ways. The human strategist is not able to keep pace
with them and we would require AI-driven defence systems to
protect ourselves from them. It would be alarming to live in a
world where aggressive technology driven by AI can be combatted
only by AI, with no human intervention in the process. It is
scary to think of a security situation, like the Cuban missile
crisis in the 1960s, where the strategies are pursued solely by
AI. Therefore, we will have to think as we do in other areas of
warfare, where we have bans on certain types of chemical weapons.
There are certain systems that are considered so potentially
devastating that they will not be used—there are moratoriums on
their use and deployment. When thinking about AI in the defence
space, we may well have to consider what security to build into
it as well. We also need to think about the responsibility of
companies that develop AI systems just for their commercial
interests. What responsibility lies on them for the systems that
they have created?
The hon. Gentleman was right to say that this is like an
industrial revolution. With industrial revolutions comes great
change. People’s ways of living and working can be disrupted, and
they are replaced by something new. We cannot yet say with
certainty what that something new could be. There are concerns,
which I will come to in a moment, about the regulation of AI.
There could be amazing opportunities, too. One can imagine
working or classroom environments where children could visit
historical events. I asked someone who works in education
development how long it could take before children studying the
second world war could put on a headset, sit in a virtual House
of Commons and watch Winston Churchill deliver one of his famous
speeches, as if they were actually sitting there. We are talking
about that sort of technology being possible within the next
decade.
The applications for learning are immense. Astronauts who
practise going to the International Space Station do so from
metaverse-style, AI-driven virtual spaces, where they can train.
At the same time as we think about the good things that it can
do, we should also consider the fact that very bad spaces could
be created. In our debates on the Online Safety Bill, we have
been concerned about abusive online behaviour. What if such
abusive behaviour took place in a video chatroom, a virtual
space, that looks just as real as this room? Who would be
responsible for that?
It is incumbent on the companies that develop these new
technologies and systems to have responsibility for the output of
those systems. The onus should be on the companies to demonstrate
that what they are developing is safe. That is why my right hon.
Friend the Chancellor of the Exchequer was right to set out in
the Budget statement last year that the Government would fund a
new AI sandbox. We have seen AI sandboxes developed in the EU. In
Washington state in the United States, AI sandboxes are used to
research new facial recognition technologies, which is
particularly sensitive. The onus should be on the developer. The
role of the regulator should be to say, “There are certain
guidelines you work within, and certain things we might consider
unsafe or unethical. You develop your technologies and new
systems and put them through a sandbox trial. You make it easy
for the regulator to ask about the data you are drawing from, the
decisions the system you have put in place is making, the
outcomes it is creating and whether they are safe.”
We have already seen that learned behaviour through data can
create unfair biases in systems. There was a case where Amazon
used AI to sift through CVs for recruitment. The AI learned that it was largely men who had been hired for the roles, and it therefore discarded the CVs of women applying for the position because it assumed they would not be qualified. We should be concerned about biases
built into data systems being exacerbated by AI.
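To make the mechanism concrete, here is a minimal sketch in Python of how such a bias arises. The data is entirely synthetic and the feature names are hypothetical; the point is only that a model trained on biased historical decisions will reproduce the bias.

```python
# Minimal sketch: a screening model trained on biased historical hiring
# decisions learns to penalise a gender-correlated feature.
# Synthetic data; all names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)               # genuinely job-relevant signal
is_woman = rng.integers(0, 2, size=n)    # proxy feature visible on the CV
# Historical decisions: skill mattered, but women were hired less often.
hired = skill - 0.8 * is_woman + rng.normal(scale=0.5, size=n) > 0

model = LogisticRegression().fit(np.column_stack([skill, is_woman]), hired)
print(model.coef_)  # the second weight is negative: the bias has been learned
```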
Some people talk about AI as if it is a future
technology—something coming—but it exists today. Every one of us
experiences or interacts with AI in some way. The most obvious
way for a lot of people is through the use of apps. The business
model of social media apps is driven by recommendation, which is
an AI-driven system. The system—Facebook, TikTok, Instagram or
whatever it is—is data profiling the user and recommending
content to keep them engaged, based on data, and it is AI driving
those recommendation tools.
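A recommendation engine of the kind described can be caricatured in a few lines. This is a deliberately crude sketch (real systems are far more elaborate, and every name here is hypothetical), but it shows the core loop: profile what kept the user engaged, then rank new content by that profile.

```python
# Crude sketch of an engagement-driven recommender. Hypothetical names.
from collections import Counter

def recommend(engaged_topics: list[str], catalogue: dict[str, str], k: int = 3):
    """Rank catalogue items by how often the user engaged with their topic."""
    profile = Counter(engaged_topics)  # the user's data profile
    ranked = sorted(catalogue, key=lambda item: profile[catalogue[item]],
                    reverse=True)
    return ranked[:k]

history = ["politics", "politics", "sport"]  # topics that kept the user engaged
catalogue = {"clip_a": "politics", "clip_b": "cooking", "clip_c": "sport"}
print(recommend(history, catalogue))  # ['clip_a', 'clip_c', 'clip_b']
```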
We have to be concerned about whether those systems create unfair
practices and behaviours in the workplace. That is why the hon.
Member for Birkenhead is right to raise this issue. If a gig
economy worker—a taxi driver or a delivery courier—is paid only
when they are in receipt of jobs on the app, does the app create
a false incentive for them to be available for work all the time?
Do they have to commit to being available to the app for most of
the day, because if they do not it drives the work to people who
have high recommendation scores because they are always
available? Do people who cannot make themselves available all the
time find that the amount they can earn is much less, if they do
not get paid for waiting time when they use such apps? If that
becomes the principal way in which a lot of tasks are driven, AI
systems, which are built to be efficient and make it easy for
people to access the labour market, could create biases that
favour some workers over others. People with other jobs or family
commitments, in particular, might not be able to make themselves
available.
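One way the availability bias behind those questions could emerge is sketched below. The weights and field names are invented for illustration; the point is that once unpaid waiting time feeds the allocation score, the worker who cannot wait all day loses jobs even with a better service record.

```python
# Hypothetical sketch of availability-weighted job allocation in a gig app.
from dataclasses import dataclass

@dataclass
class Courier:
    name: str
    rating: float        # service quality, 0 to 5
    hours_online: float  # unpaid waiting time logged today

def allocation_score(c: Courier) -> float:
    # Invented weights: rewarding availability favours those who can wait unpaid.
    return 0.3 * c.rating + 0.7 * (c.hours_online / 12)

couriers = [Courier("always-on", 4.0, 12.0), Courier("school-run", 4.8, 4.0)]
print(max(couriers, key=allocation_score).name)  # "always-on", despite the lower rating
```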
We should consider not just the way the technology works but the
rights that citizens and workers have if their job is based on
using those apps. The employer—the app developer—should treat the
people who work for them as employees, rather than as just
freelance agency workers who happen to be available at any
particular time of the day. They have some sort of working
relationship that should be honoured and respected.
The basic principle that we should apply when we think about the
future of AI and its enormous potential to create growth and new
jobs, and build fantastic new businesses, is that the rights that
people enjoy today—their rights as citizens and employees—should
be translated into the future world of technology. A worker
should not lose their working rights simply because their
relationship with their employer or their customer is through an
app, and because that experience is shaped by the collection and
processing of data. Ultimately, someone is doing that processing,
and someone has created that system in order to make money from
it. The people doing that need to be responsible for the
technology they have created.
2.52pm
Dean Russell (Watford) (Con)
It is a privilege to speak in this debate, and I thank the hon.
Member for Birkenhead (Mick Whitley) for securing it. I wanted to
apply for it myself—he beat me to the chase, which is a wonderful
thing.
Before I became an MP, one of my final clients was in the AI
space. It dealt with artificial intelligence and psychology—I
believe that my first entry in the Register of Members’ Financial
Interests was my final bit of work for it—so I have seen this
technology evolve over many years. We often talk about technology
revolutions, but this has been an incredibly fast evolution.
We are seeing Moore’s law, which related to the size and scale of
technology, affect society. The scale of what is happening right
now is both inspirationally amazing and terrifying at the same
time. It will absolutely shape the job market and the type of
jobs that come through over the next few years. It will shape how
people interface with their co-workers, with technology, with
society and with politicians. It will affect every aspect of our
lives.
I am particularly concerned about the use of artificial
intelligence for deception. I have long said—not necessarily in
the Chamber, so I put it on the record now—that there should be
in law something that I would call the Turing clause. It would
mean that when technology is used to deceive somebody into
believing that they are talking to a real person or engaging with
a real business, whether for entertainment or for any other
purpose—for instance watching a deepfake, which is perhaps for
entertainment purposes—it must be crystal clear to them that they
are being deceived.
I will give some examples. I was recently speaking to somebody
who works in the entertainment industry, running studios where
they record sound, voiceovers and music. They said—I should
declare that I do not know the scale of this issue and have not
looked into the numbers—that a lot of the studios are often being
used to record voiceovers for AI companies, so that the AI can
learn how to speak like a real person. We all know about fraud
and scams in which somebody gets phoned up from a call centre and
told, “Your insurance is up,” or by someone pretending to be from
the Government. We saw, awfully, during the covid crisis how
those horrible people would try to scam people. Doing that
requires a number of people in a space.
Now imagine that AI can pretend to be somebody we know—a family
member, for instance—and imitate their voice. It could call up
and say, “I need some money now, because I am in trouble,” or, “I
need some support.” Or it could say, “This is somebody from the
Government; your tax affairs are an issue—send your details now.”
There are a whole load of things going on in society that we will
not know about until it is too late. That is why a Turing clause
is absolutely essential, so that we are ahead of the curve on
deception, deepfakes and areas where technology will be used to
fool.
One incredibly important area in relation to the labour market
that is not often talked about is the role of AI in creativity.
DALL-E 2 is one of the tools, and there are many others popping
up now. They can create artwork and videos almost at the speed of
thought—typing in a particular phrase will create amazingly
beautiful pictures—but they are pooling those from places where
real artists and real musicians, with particular styles, have
contributed. That is then presented as AI creativity. That could
kill the graphic design industry. It could prevent people who are
in the early stages of life as an artist, in both the visual and
music worlds, from ever having an opportunity to be
successful.
Just recently, Drake and the Weeknd—if I have those artists
correct—had a song that was put online. I think that it even went
on Spotify, but it was definitely on some streaming services.
Everybody thought, “Gosh, this is a fantastic new collaboration.”
It was not. It was AI pretending to be both of those artists with
a brand new song. Artificial intelligence had created it. It was
not until after the fact, and after the song had been streamed
hundreds of thousands of times, that the big music companies
said, “Hang on—that isn’t real. We need to stop this.” Then it
was stopped.
In the case of social media, it took us many years to get to the
fantastic Online Safety Bill. I was very fortunate to be on the
Draft Online Safety Bill Joint Committee. Its Chair, my hon.
Friend the Member for Folkestone and Hythe (Damian Collins), is in the room today, and
he did a fabulous job. Getting to that point took 10 or 15 years.
We do not have 10 or 15 months to legislate on AI. We probably do
not have 10 or 15 weeks, given where we will be in a matter of
days, with the new announcements and tools that are coming
out.
Dr Cameron
I thank the hon. Gentleman for making those extremely important
points. Just last week, we had the Children’s Parliament at the
all-party parliamentary group on the metaverse and web 3.0. The
children were excited about the opportunities of AI and the
metaverse, and we were told on the day that the World Economic
Forum predicts that technology will create 97 million new jobs by
2025 alone. But like the hon. Gentleman, they were also very
concerned about what is real and what is not, and they were
concerned about the mental health impact of spending much of the
day in an altered reality setting. Does the hon. Gentleman agree
that we need much more research into the mental health impact on
staff and young people who are engaged in AI?
I thank the hon. Member for her comments. Mental health is a
passion of mine—I had a ten-minute rule Bill about ensuring that
mental health first aiders are in the workplace—and I agree
wholeheartedly. We saw that in evidence given to the Draft Online
Safety Bill Joint Committee; Rio Ferdinand talked, including in
his documentary, about the fact that what is said online can
affect a person’s real life. The challenge with artificial
intelligence is that it will not just be able to say those
things; it will probably know precisely how to do the most harm,
how to hit the right triggers to make people buy things and how
to fool and deceive people to ensure they hand over money or
their rights.
I will move on because I am conscious of time. I know we have
quite a long time for this debate, but I do not intend to use it
all; I promise. I think that the creativity part is absolutely
essential. A few weeks ago, I predicted in Parliament that, in
the next year or so, a No. 1 song will be created by artificial
intelligence for the first time. I have no doubt that a No. 1
bestselling book will be written by artificial intelligence. I
have no doubt that new songs in the voices of artists who are no
longer around, such as Elvis Presley, will be released, and that
actors who are sadly no longer alive will play starring roles in
new films. We are seeing this already on a soft scale, but it is
going to become more and more pervasive.
It is not all negative. I do not want to be a doomsayer. There
are great opportunities: Britain—this wonderful country—could be
the home of identifying and delivering transparency within those
industries. We could be the country that creates the technology
and the platforms to identify where artificial intelligence is
being used; it could flag up when things are not real. It could,
for example, force organisations to say who they are, what they
are doing and whether they have used artificial intelligence. I
think that will create a whole new world of labour markets and
industries that will stem from this country and create all the
jobs that we talked about earlier.
I am also concerned that we do not often talk in the same breath
about artificial intelligence and robotics. In the industrial
world, such as in warehouses and so on, there has been a rise in
the use of robotics to replace real people. Office jobs are
changing due to artificial intelligence. The role of accountants,
of back-office staff and of both blue and white-collar workers
will change.
As was stated earlier, one challenge with robotics is in areas such as defence. Artificial intelligence is being used in
robotics to get way ahead of the scale of where we are now. We
really need to take that seriously. ChatGPT was probed. People
tried to catch it out on different aspects of its response. When
asked how it would steal the nuclear codes, it outlined how it
would do it. I am not trying to give any bad actors out there any
ideas, but it explained how it would use AI to control drones,
and how they would be able to go in and do certain things.
Hopefully, it got it all wrong. However, if AI is in not just our
computers and mobile phones, but in drones and new robots that
are incredibly sophisticated, incredibly small and not always
identifiable, we need to be really wary.
There are many positives, such as for detection in the health
sector and for identifying things such as breast cancer.
Recently, I have seen lots of work about how artificial
intelligence could be layered on the human aspect and insight,
which was mentioned earlier, and enable the identification of
things that we would not normally be able to see.
There is huge positive scope for using data. I have said
previously that, if we were to donate our health data to live
clinical trials in a way that was legitimate and pseudonymised,
artificial intelligence could be used to identify a cure for
cancer and for diseases that have affected our society for many
centuries. In the same way that it has found new ways of playing
chess, it might find new ways of changing and saving lives. There
is great opportunity there.
Many years ago, I wrote an article called, “Me, Myself and AI”.
In it, I commented on areas where AI is dangerous, but I also
mentioned opportunities for positives. I would like to make one
final point on this: we must also make sure that the data that
goes into the AI is tracked not only for things such as royalties
in creative industries, but for bias. I wrote an article on that
a while ago. If we take a sample, say within a health context,
and take that data based on only one ethnicity or demographic,
the AI will develop options and solutions for that group. If we
do not have the right data, regarding diversity, going into the
analysis, we risk not being able to identify future issues. For
example, sickle cell disease might get missed because the data
that the AI is using is based only on clinical trials with white
people.
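That risk can be made visible with a very simple audit of a training set's demographic coverage before any model is fitted. A minimal sketch, with invented record fields:

```python
# Minimal sketch: audit demographic coverage of a training set before
# fitting a diagnostic model. Record fields are hypothetical.
from collections import Counter

def coverage(records: list[dict], field: str = "ethnicity") -> dict[str, float]:
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

trial_data = [{"ethnicity": "white"}] * 950 + [{"ethnicity": "black"}] * 50
print(coverage(trial_data))  # {'white': 0.95, 'black': 0.05}
# A model trained on this set may rarely, if ever, see a sickle cell case.
```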
There is a wide-ranging issue about what is being fed into the
systems around AI and how we ensure that we identify where AI is
being used—hence my point about a Turing clause when it comes to
deception. We also need to know where it is being used, including
in Government. We need to look at the opportunities, too: whole
new industries around how we monitor AI, apply it and use the
science of it.
AI is already there in the spelling of “Great Britain”. We have a
great opportunity to be ahead of the curve, and we need to be
because the curve will be moving beyond us within a matter of
weeks or months—and definitely within years.
3.06pm
Richard Thomson (Gordon) (SNP)
It is a pleasure to serve under your chairship this afternoon,
Dame Maria, and to take part in this particularly timely debate.
I congratulate the hon. Member for Birkenhead (Mick Whitley) on securing it.
I begin by declaring a rather tenuous interest—a constituency
interest of sorts—regarding the computing pioneer Alan Turing.
The Turing family held the baronetcy of Foveran, which is a
parish in my constituency between the north of Aberdeen and
Ellon. Although there is no evidence that Alan Turing ever
actually visited, it is a connection that the area clings to as
fast as it can.
Alan Turing, of course, developed what we now know as the Turing
test—a test of a machine’s ability to exhibit intelligent
behaviour equivalent to, or indistinguishable from, that of a
human. One of the developments to come closest to that in recent
times is, of course, ChatGPT, which several speakers have
mentioned already. It is a natural-language processing tool
driven by AI technology, which has the ability to generate text
and interact with humans.
The hon. Member for Birkenhead was a bit braver than I was; I
only toyed with the idea of using ChatGPT to produce some of my
speech today. However, I was put off somewhat by a very good
friend of mine, with an IT background, using the ChatGPT
interface to produce a biography of me. He then shared it with
his friendship group on Facebook.
I think it is fair to say that it shows up clearly that if
ChatGPT does not know the answer to something, it will fill the
gap by making up something that it thinks will sound plausible.
In that sense, it is maybe no different from your average Cabinet
Minister. However, that does mean that, in subject areas where
the data on which it is drawing is rather scant, things can get
quite interesting and inventive.
Dean Russell
The hon. Gentleman makes an incredibly important point. When AI
systems such as that are asked questions that they do not know,
rather than responding, “I don’t know,” they just make something
up. A human is therefore required to understand whether what they
are being shown is correct. The hon. Gentleman knows his own
biography better than ChatGPT does, but someone else may not.
I thank the hon. Member for that intervention. He has perhaps
read ahead towards the conclusion of my speech, but it is an
interesting dichotomy. Obviously, I know my biography best, but
there are people out there, not in the AI world—Wikipedia
editors, for example—who think that they know my biography better
than I do in some respects.
However, to give the example, the biography generated by AI said
that I had been a director at the Scottish Environmental
Protection Agency, and, prior to that, I had been a senior
manager at the National Trust for Scotland. I had also apparently
served in the Royal Air Force. None of that is true, but, on one
level, it does make me want to meet this other version of me who exists out there. He
has clearly had a far more interesting life than I have had to
date.
Although that level of misinformation is relatively benign, it
does show the dangers that can be presented by the manipulation
of the information space, and I think that the increasing use and
application of AI raises some significant and challenging ethical
questions.
Any computing system is based on the premise of input, process
and output. Therefore, great confidence is needed when it comes
to the quality of information that goes in—on which the outputs
are based—as well as the algorithms used to extrapolate from that
information to create the output, the purpose for which the
output is then used, the impact it goes on to have, and, indeed,
the level of human oversight at the end.
In March, Goldman Sachs published a report indicating that AI
could replace up to 300 million full-time equivalent jobs and a
quarter of all the work tasks in the US and Europe. It found that
some 46% of administrative tasks and even 44% in the legal
professions could be automated. GPT-4 recently managed to pass
the US Bar exam, which is perhaps less a sign of machine
intelligence than of the fact that the US Bar exam is not a
fantastic test of AI capabilities—although I am sure it is a
fantastic test of lawyers in the States.
Our fear of disruptive technologies is age-old. Although it is
true to say that such disruption has generally created new jobs and allowed new technologies to take on more laborious and repetitive tasks, it
is still extremely disruptive. Some 60% of workers are currently
in occupations that did not exist in 1940, but there is still a
real danger, as there has been with other technologies, that AI
depresses wages and displaces people faster than any new jobs can
be created. That ought to be of real concern to us.
In terms of ethical considerations, there are large questions to
be asked about the provenance of datasets and the output to which
they can lead. As The Guardian reported recently:
“The…datasets used to train the latest generation of these AI
systems, like those behind ChatGPT and Stable Diffusion, are
likely to contain billions of images scraped from the internet,
millions of pirated ebooks”
as well as all sorts of content created by others, who receive no
reward for its use; the entire proceedings of 16 years of the
European Parliament; or even the entirety of the proceedings that
have ever taken place, and been recorded and digitised, in this
place. The datasets can be drawn from a range of sources and they
do not necessarily lead to balanced outputs.
ChatGPT has been banned from operating in Italy after the data
protection regulator there expressed concerns that there was no
legal basis to justify the collection and mass storage of the
personal data needed to train GPT AI. Earlier this month, the
Canadian privacy commissioner followed, with an investigation
into OpenAI in response to a complaint that alleged that the
collection, use and disclosure of personal information was
happening without consent.
This technology brings huge ethical issues not just in the
workplace but right across society, but questions need to be
asked particularly when it comes to the workplace. For example,
does it entrench existing inequalities? Does it create new
inequalities? Does it treat people fairly? Does it respect the
individual and their privacy? Is it used in a way that makes
people more productive by helping them to be better at their jobs
and work smarter, rather than simply forcing them—notionally, at
least—to work harder? How can we be assured that at the end of
it, a sentient, qualified, empowered person has proper oversight
of the use to which the AI processes are being put? Finally, how
can it be regulated as it needs to be—beneficially, in the
interests of all?
The hon. Member for Birkenhead spoke about and distributed the
TUC document “Dignity at work and the AI revolution”, which, from
the short amount of time I have had to scrutinise it, looks like
an excellent publication. There is certainly nothing in its
recommendations that anyone should not be able to endorse when
the time comes.
I conclude on a general point: as processes get smarter, we
collectively need to make sure that, as a species, we do not
consequentially get dumber. Advances in artificial intelligence
and information processing do not take away the need for people
to be able to process, understand, analyse and critically
evaluate information for themselves.
Dean Russell
This is one point—and a concern of mine—that I did not explore in
my speech because I was conscious of its length. As has been
pointed out, a speech has been given previously that was written
by artificial intelligence, as has a question in Parliament. We
politicians rely on academic research and on the Library. We also
google and meet people to inform our discussions and debates. I
will keep going on about my Turing clause—which connects to the
hon. Gentleman’s point—because I am concerned that if we do not
have something like that to highlight a deception, there is a
risk that politicians will go into debates or votes that affect
the government of this country having been deceived—potentially
on purpose, by bad actors. That is a real risk, which is why
there needs to be transparency. We need something crystal clear
that says, “This is deceptive content” or “This has been produced
or informed by AI”, to ensure the right and true decisions are
being made based on actual fact. That would cover all the issues
that have been raised today. Does the hon. Member share that
view?
Yes, I agree that there is a very real danger of this technology
being used for the purposes of misinformation and disinformation.
Our democracy is already exceptionally vulnerable to that. Just
as the hon. Member highlights the danger of individual
legislators being targeted and manipulated—they need to have
their guard up firmly against that—there is also the danger of
people trying to manipulate behaviour by manipulating wider
political discourse with information that is untrue or
misleading. We need to do a much better job of ensuring we are
equipping everybody in society with critical thinking skills and
the ability to analyse information objectively and
rationally.
Ultimately, whatever benefits AI can bring, it is our quality of
life and the quality of our collective human capital that counts.
AI can only and should only ever be a tool and a servant to that
end.
3.16pm
Justin Madders (Ellesmere Port and Neston) (Lab)
It is a pleasure to see you in the Chair, Dame Maria. This has
been a thoughtful and engaging debate on an important subject,
and the contributions have raised very important issues.
I particularly thank my hon. Friend the Member for Birkenhead
(Mick Whitley) for introducing this debate.
I thought his opening remarks about me were uncharacteristically
generous, so I had a suspicion that it did not all come from
him—if he wants to blame the computer, that’s fine! As he did, I
refer to my entry in the Register of Members’ Financial
Interests. My hon. Friend has a long history in the workplace and
has seen how automation has changed work—particularly the kind
done at Vauxhall Motors in Ellesmere Port—dramatically over many
years. What we are talking about today is an extension of that,
probably at a greater pace and with greater consequences for jobs
than we have seen in the past.
My hon. Friend the Member for Birkenhead said there will be
winners and losers in this; that is very important. We must be
cognisant of sectors affected by AI where there will probably be
more losers than winners, including manufacturing, transport and
public administration. My hon. Friend hit the nail on the head
when he said that we must have a rights-based and people-focused
approach to this incredibly complicated subject. He was right to
refer to the TUC paper about the issue. We cannot go far wrong if
we hold to the principles and recommendations set out there.
The hon. Member for Folkestone and Hythe (Damian Collins) made an excellent
contribution, showing a great deal of knowledge in this area. He
is absolutely right to say that there has to be a level of human
responsibility in the decision-making process. His references to
AI in defence systems were quite worrying and sounded like
something from the “Terminator” films. It sounds like dramatic
science fiction, but it is a real, live issue that we need to
address now. He is right that we should ensure that developers
are able to clearly demonstrate the data on which they are basing
their decisions, and in saying that the gig economy is a big part
of the issue and that the intervention of apps in the traditional
employment relationship should not be used as a proxy to water
down employment rights.
The hon. Member for Watford (Dean Russell) also gave a very considered
speech. He summed it up when he said that this is both amazing
and terrifying. We have heard of some wonderful things that can
be done, but also some extremely worrying ones. He gave examples
of deception, as well as of the wonderful art that can be created
through AI, and encapsulated why it is so important that we have
this debate today. Although the debate is about the potential
impacts of AI, it is clear that change is happening now, and at a
dramatic pace that we need to keep up with; the issue has been
affecting workers for some time now.
When we survey the Government’s publications on the impact of AI
on the market, it is readily apparent that they are a little bit
behind the curve when it comes to how technologies are affecting
the way work is conducted and supervised. In the 2021 report,
“The Potential Impact of Artificial Intelligence on UK Employment
and the Demand for Skills”, and the White Paper published last month, there was a failure to address the issues
of AI’s role in the workplace. The focus in both publications was
the bigger picture, but I do not think they addressed in detail
the concerns we have discussed today.
That is not to downplay the wider structural economic change that
AI could bring. It has the potential to have an impact on demand
for labour and the skills needed, and on the geographical
distribution of work. This will be a central challenge for any
Government over the next few decades. As we have heard, the
analysis already points in that direction, with the 2021
Government report estimating that 7% of jobs could be affected in
just five years and 18% in 10 years, with up to 30% of jobs over
20 years facing the possibility of automation. That is millions
of people who may be displaced in the labour market if we do not
get this right.
I will focus my comments on the impact on individual workers,
because behind the rhetoric of making the UK an AI superpower,
there are statements about having a pro-innovation, light-touch
and coherent regulatory framework, with a desire not to legislate
too early or to place undue burdens on business. That shows that
the Government are, unfortunately, content to leave workers’
protections at the back of the queue. It is telling that in last
month’s White Paper—a document spanning 91 pages—workplaces are
mentioned just three times, and none of those references are
about the potential negative consequences that we have touched on
today. As we are debating this issue now, and as the Minister is
engaged on the topic, we have the opportunity to get ahead of the
curve, but I am afraid that the pace of change in the workplace
has completely outstripped the pace of Government intervention
over the last number of years.
It has been four years since we saw the Government’s good work
plan, which contained many proposals that might help mitigate
elements of AI’s use in the workplace. The Minister will not be
surprised to hear me mention the employment Bill, which has been
promised on many occasions and could have been an opportunity to
consider some of these issues. We need an overarching,
transformative legislative programme to deal with these matters,
and the many other issues around low pay and chronic insecurity
in the UK labour market—and we need a Labour Government to
provide that.
With an absence of direction from Government, there is already a
quiet revolution in the workplace being caused by AI. Workers
across a broad range of sectors have been impacted by management
techniques derived from the use of artificial intelligence. The
role of manager is being diluted. Individual discretion, be it by
the manager or worker, has in some instances been replaced by
unaccountable algorithms. As we have heard, such practices carry
risks.
Reports both in the media and by researchers have found that
workplaces across a range of sectors are becoming increasingly
monitored and automated, and decisions of that nature are
becoming normalised. A report on algorithmic systems by the
Institute for the Future of Work noted that this is ultimately redefining work in much narrower terms, limited to what can be quantified by an algorithm, with less room for the use of human judgment.
Crucially, the institute found that workers were rarely involved
in or even consulted about these types of data-driven
technologies. The changes have completely altered those people’s
experience of work, with greater surveillance and greater
intensification, and use in disciplinary procedures. Members may
be aware that there is now a greater use of different varieties
of surveillance, including GPS, cameras, eye-tracking software,
heat sensors and body-worn devices, so the activities of workers
can be monitored to an extent that was hitherto unimaginable.
Of course, surveillance is not new, but the way it is now
conducted reduces trust, and makes workers feel more insecure and
as if they cannot dispute the evidence that the technology presents. Most at risk of that monitoring, as the Institute for
Public Policy Research has said, are those in jobs with lower
worker autonomy, those with lower skills, and those without trade
union representation. The latter is an area where the risk
increases substantially, which tells us everything that we need
to know about the importance of becoming a member of a trade
union. The news today that the GMB is making progress in
obtaining recognition at Amazon is to be welcomed in that
respect.
Increased surveillance and monitoring is not only problematic in
itself; it can lead to an intensification of work. Testimony from
workers in one study stated that they are expected to be
conducting work that the system can measure for 95% of the
working day. Time spent talking to colleagues, using the bathroom
or even taking a couple of minutes to make a cup of tea will not
be registered as working, and will be logged for a manager to
potentially take action against the individual. That pressure
cannot be conducive to a healthy workplace in the long run. It
feels almost like automated bullying, with someone monitoring
their every move.
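The arithmetic behind that 95% target is simple, which is what makes it so unforgiving. Here is a sketch of the measurement logic, assuming a hypothetical activity log in which any minute without a system-visible event counts as idle:

```python
# Sketch of "measured day" monitoring logic. The log format is hypothetical:
# any minute with no system-visible event is classed as "idle".
def measured_ratio(minutes: list[str], target: float = 0.95) -> tuple[float, bool]:
    measured = sum(1 for m in minutes if m != "idle")
    ratio = measured / len(minutes)
    return ratio, ratio >= target

# An 8-hour day with 50 untracked minutes (tea, bathroom, talking to colleagues).
day = ["typing"] * 430 + ["idle"] * 50
ratio, ok = measured_ratio(day)
print(f"{ratio:.1%} measured; flagged for a manager: {not ok}")  # 89.6%; True
```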
Many businesses now rely on AI-powered systems for fully
automated or semi-automated decision making about task
allocation, work scheduling, pay, progression and disciplinary
proceedings. That presents many dangers, some of which we have
talked about. Due to the complexities in the technology, AI
systems can sometimes be treated as a trusted black box by those who use
them. The people using them assume that the outcome that emerges
from the AI system is free of bias and discrimination, and
constitutes evidence for the basis of their decisions, but how
does someone contest a decision if they cannot question an
algorithm?
As we have heard, there is potential for algorithmic bias. AI
technology can operate only on the basis of the information put
into it. Sometimes human value judgments form the basis of what
is fed into the AI, and how the AI analyses it. As the hon.
Member for Folkestone and Hythe mentioned, there are some famous
examples, such as at Amazon, where AI was found to be
systematically discounting women for particular job
applications because of the way the algorithm worked. There is
little transparency and a lack of checks and balances regarding
how the technology can be used, so there is a palpable risk of
AI-sanctioned discrimination running riot without transparency at
the forefront.
I would like the Minister to commit to looking at how the
technology works in the workplace at the moment, and to making an
assessment of what it is being used for and its potential to
discriminate against people with protected characteristics. The
Data Protection and Digital Information (No. 2) Bill will create
new rights where wholly automated decision making is involved,
but the question is: how will someone know when a fully automated
decision has been taken if they are not told about it? Is there
not a risk that many employers will slot into the terms and
conditions of employment a general consent to automated decision
making, which will remove the need for the person to be notified
altogether?
A successful AI strategy for this country should not be built on
the back of the poor treatment of workers, and it is the
Government’s role to create a legal and regulatory environment
that shields workers from the most pernicious elements of these
new technologies. That cannot be fixed by introducing single
policies that tinker at the edges; it requires a long overdue
wholesale update to our country’s employment laws. As the
Minister will know, our new deal for working people will set out
a suite of policies that address that. Among other things, it
will help to mitigate the worst effects of AI, and will introduce
measures that include a right to switch off, which will guard
against some of the egregious examples of AI being used to
intensify people’s work.
As the organised representation of the workforce, trade unions
should be central to the introduction of any new technologies
into the workplace. Not only will that enable employers and their
representatives to find agreeable solutions to the challenges
raised by modern working practices, but it will encourage more
transparency from employers as to how management surveillance and
disciplinary procedures operate. Transparency has been picked up
a few times and it is key to getting this right.
Artificial intelligence’s impact is already being felt up and
down the country, but the Government have not been quick enough
to act, and its worst excesses are already out there. The need
for transparency and trust with technology is clear, and we need
to make sure that that has some legislative backing. It is time
for a Labour Government to clear that up, stand up for working
people and bolster our labour market so that new technologies
that are already with us can be used to make work better for
everyone.
3.31pm
The Parliamentary Under-Secretary of State for Business and Trade (Kevin Hollinrake)
I am grateful to be called, Dame Maria, and it is a pleasure to
speak in the debate. I congratulate the hon. Member for
Birkenhead (Mick Whitley) on bringing this timely
subject forward. I thought it would be appropriate to type his
question into ChatGPT. I put in, “What is the potential impact of
AI on the labour market?” It said, “AI has the potential to
transform many aspects of the economy and society for the better.
It also raises concerns about job displacement and the future of
work.” That is it in a nutshell. It did not say that it was time
for a Labour Government.
Justin Madders
Did the AI tell the Minister that the Conservative Government
have got everything right?
I have not actually posed that question, but perhaps I could
later.
This is an important debate, and it is important that we look at
the issue strategically. The Government and the Labour party
probably have different approaches: the Labour party’s natural
position on this kind of stuff is to regulate everything as much
as possible, whereas we believe that free markets have had a
tremendous effect on people’s lives right across the planet.
Whether we look at education, tackling poverty or child
mortality, many of the benefits in our society over the last 100
years have been delivered through the free market.
Our natural inclination is to support innovation but to be
careful about its introduction and to look to mitigate any of its
damaging effects, and that is what is set out in the national AI
strategy. As we have seen, AI has the potential to become one of
the most significant innovations in history—a technology like the
steam engine, electricity or the internet. Indeed, my hon. Friend
the Member for Folkestone and Hythe (Damian Collins) said exactly that: this is
like a new industrial revolution, and I think it is a very
exciting opportunity for the future. However, we also have key
concerns, which have been highlighted by hon. Members today.
Although the Government believe in the growth potential of these
technologies, we also want to be clear that growth cannot come at
the expense of the rights and protections of working people.
Only now, as the technology rapidly improves, are most of us
beginning to understand the transformative potential of AI.
However, the technology is already delivering fantastic social
and economic benefits for real people. The UK’s tech sector is
home to a third of Europe’s AI companies, and the UK AI sector is
worth more than £15.6 billion. The UK is third in the world for
AI investment, behind the US and China, and attracts twice as
much venture capital investment as France and Germany combined.
As impressive as they are, those statistics should be put into
the context of the sector’s growth potential. Recent research
predicts that the use of AI by UK businesses will more than
double in the next 20 years, with more than 1.3 million UK
businesses using AI by 2040.
The Government have been supporting the ethical adoption of AI
technologies, with more than £2.5 billion of investment since
2015. We recently announced £100 million for the Foundation
Models Taskforce to help build and adopt the next generation of
safe AI, £110 million for our AI tech missions fund and £900
million to establish new supercomputer capabilities. These
exascale computers were mentioned in the Budget by my right hon.
Friend the Chancellor. These developments have incredible potential to bring forward new forms of clean energy, and the new materials needed to deliver it, and to accelerate advances such as new medical treatments. There are exciting
opportunities ahead.
If we want to become an AI superpower, it is crucial that we do
all we can to create the right environment to harness the
benefits of AI and remain at the forefront of technological
developments. Our approach, laid out in the AI White Paper, is
designed to be flexible. We are ensuring that we have a
proportionate, pro-innovation regulatory regime for AI in the UK,
which will build on the existing expertise of our world-leading
sectoral regulators.
Our regulatory regime will function by articulating five key principles, which are central to this debate and tackle
many of the points that have been made by hon. Members across the
Chamber. Regulators should follow these five principles when
regulating AI in their sectors: safety, security and robustness;
transparency and explainability; fairness; accountability and
governance; and contestability and redress. That feeds into the
important points made by my hon. Friend the Member for Watford, who held this ministerial position immediately prior to me, about deception, scams and
fraud. We can all see the potential for that, of course.
Clearly, right across the piece, we have regulators with
responsibility in those five areas. Those regulators are there to
regulate bona fide companies, which should do the right thing,
although we have to make sure that they do. For instance, if
somebody held a database with inappropriate data on it, the
Information Commissioner’s Office could easily look at that, and
it has significant financial penalties at its disposal, such as
4% of global turnover or a £17 million fine. My hon. Friend the
Member for Watford made a plea for a Turing clause, which I am,
of course, very happy to look at. I think he was referring to
organisations that might not be bona fide, and might actually be
looking to undertake nefarious activities in this area. I do not
think we can regulate those people very effectively, because they
are not going to comply with anybody’s regulations. The only way
to deal with those people is to find them, catch them, prosecute
them and lock them up.
The Minister talks about safety, but does he agree that that has
to be safety by design, and not just having response mechanisms
built into the system so that a victim can appeal? I know he has
looked at fraud a lot in the past, and there is a presumption
that all will be done to combat fraud at its known source, rather
than just providing redress to victims.
That is absolutely right. We will not deal with everything in the
world of AI in this respect, but there needs to be overarching
responsibility for preventing fraud. That is something we have
committed to bringing forward in another legislative vehicle—the
Economic Crime and Corporate Transparency Bill, which is passing
through Parliament now—but I agree with my hon. Friend that there
should be a responsibility on organisations to prevent fraud and
not simply deal with the after-effects.
Our proposed framework is aligned with and supplemented by a
variety of tools for trustworthy AI, such as assurance
techniques, voluntary guidance and technical standards. The
Centre for Data Ethics and Innovation published its AI assurance
road map in December 2021, and the AI Standards Hub—a
world-leading collaboration led by the Alan Turing Institute with
the National Physical Laboratory and the British Standards
Institution—launched last October. The hub is intended to provide
a co-ordinated contribution to standards development on issues
such as transparency, security and uncertainty, with a view to
helping organisations to demonstrate that AI is used safely and
responsibly.
We are taking action to ensure that households, public services
and businesses can trust this technology. Unless we build public
trust, we will miss out on many of the benefits on offer. The
reality is that AI, as with other general-purpose technologies,
has the potential to be a net creator of jobs. I fully understand
the points raised by the hon. Member for Birkenhead—of course, we
do not want to see swathes of people put out of work because of
this technology. I hasten to add that that has never been the outcome with previous technologies. There have been many concerns over
the ages about how new technologies will affect jobs, but they
tend to create other jobs in different sectors. The World
Economic Forum estimates that robotics, automation and artificial
intelligence will displace 85 million jobs globally by 2025, but
create 97 million new jobs in different sectors, which I will
discuss in a second. I think the hon. Member for Birkenhead asked
in his speech whether I would be willing to meet him to discuss
these points; I am always very happy to do that, if we can
convene at another time.
The hon. Member also made the point that AI in the workplace has the potential to liberate the workforce from
monotonous tasks such as inputting data or scanning through
documents for a single piece of information. I will address the
bigger concerns he has around that, but in the public sector it
would leave teachers with more time to teach, clinicians with
more time to spend with patients and police officers with more
time on the beat, rather than being behind a desk.
As my hon. Friend the Member for Folkestone and Hythe saliently pointed out, AI also has tremendous potential in defence
and national security. That is absolutely critical. It was
interesting that leading people in the world of technology, led
by Elon Musk, recently wrote a letter asking for a six-month
pause while we look at how we can properly moderate the impacts
of AI. I am not sure that that is a good idea, because I am not
sure China and Russia would play that game. It is important that
we stay ahead of the curve, for exactly the reasons pointed out
by my hon. Friend.
The Minister is exactly right. That initiative also suggests that
AI is not yet here but, actually, the issues we have discussed
today exist already. We can look at them already; we do not need
a six-month pause to do that.
That is absolutely right. There is an opportunity but also a
potential threat. It is important that we continue to invest, and
it is great that the UK is ahead of the game in its investment,
behind only the US and China, which are obviously much bigger
economies.
The key thing is that we take action on skills, equipping the UK workforce to take advantage of the potential of AI.
Clearly, a good computing education is at the heart of that. We
have overhauled the outdated information and communications
technology curriculum and replaced it with computing, and
invested £84 million in the National Centre for Computing
Education to inspire the next generation of computer scientists.
Our national skills fund helps to do just that, with free level
3 qualifications for adults and skills bootcamps in digital
courses, including coding, AI and cyber-security, available
across England.
On that point, as well as the opportunities in AI itself, we need to look at the opportunities in the wider new economy. Some jobs will
be displaced, so we need to ensure that we are skilling up our
workforce for other opportunities in our new economy, be it data
science or green jobs with the green jobs taskforce. Recently, in
Hull, there were 3,000 new jobs in the wind turbine sector with a
starting salary of £32,000, which illustrates the potential for
green jobs in our economy. So although some jobs might be displaced, other, hopefully better-paid, jobs will replace them. We want a
higher-wage, higher-skilled economy.
The Government are also supporting 16 centres for doctoral
training, backed by an initial £100 million, delivering 1,000
PhDs. We expanded that programme with a further £117 million at
the recent launch of the Government’s science and technology
framework. Last year, we invested an additional £17 million in AI
and data science postgraduate conversion courses and scholarships
to increase the diversity of the tech workforce, on top of the
£13 million that has been invested in the programme since
2019-20. We also invested £46 million to support the Turing AI
fellowships to attract the best and brightest AI talent to work
in the UK.
The point about protections for workers’ rights was raised by
many Members in the debate, not least the hon. Members for Gordon
and for Birkenhead; the
shadow Minister, the hon. Member for Ellesmere Port and Neston; and my hon. Friends the
Members for Folkestone and Hythe and for Watford. It is important
to set out the Government’s position on workers’ rights here. We are bolstering workers’ rights: we are raising the national living wage by nearly 10%, the highest increase on record, and supporting six private Members’ Bills that strengthen workers’ rights, including on flexible working and other issues. There is also the
Employment (Allocation of Tips) Bill, which is the favourite Bill
of my hon. Friend the Member for Watford, who was its sponsor
prior to becoming the Minister.
On the concerns many raised about workplace monitoring, we are
committed to protecting workers. A number of laws are already in
place that apply to the use of AI and data-driven technology in
the workplace, including in decision making, which was raised by
the hon. Member for Ellesmere Port and Neston. The Equality Act
2010 already requires employers and service providers not to
discriminate against employees, job applicants and customers.
That includes discrimination through actions taken as a result of
an algorithm or a similar artificial intelligence mechanism.
Tackling discrimination in AI is a major strand of the Equality
and Human Rights Commission’s three-year strategy. Existing data
protection legislation protects workers where personal data is
involved, and that is one aspect of existing regulation on the
development of AI systems and other technologies.
Reforms as part of the Data Protection and Digital Information
Bill will cast article 22 of the UK GDPR as a right to specific
safeguards, rather than as a general prohibition on solely
automated decision making. These rights ensure that data subjects
are informed about, and can seek human review of, significant
decisions that are taken about them solely through automated
means, which was a point raised by the shadow Minister.
Employment law also offers protections. The Employment Rights Act
1996 provides that employees with two years of continuous service
are protected from unfair dismissal, which would encompass
circumstances where employees’ article 8 and UK GDPR rights have
been breached in the algorithmic decision-making process that led
to the dismissal.
Of course, all good employers—by their very nature—should use
human judgment. The best way we can help employers in any
workplace is to have a strong jobs market where employers have to
compete for employees. That is the kind of market we have
delivered in this economy, despite some of the difficulties that
surround it.
I once again thank the hon. Member for Birkenhead for tabling
this timely and important debate. To be clear again, we have a
strong ambition for the UK to become a science and technology
superpower, and AI is a key part of that. However, the Government
recognise the concerns around these technologies and appreciate
that, as with all new technologies, trust has to be built. We
will continue to build our understanding of how the employment
rights framework operates in an era of increasing AI use. AI has
the potential to make an incredibly positive contribution to
creating a high-wage, high-skill and high-productivity economy. I
very much look forward to seeing the further benefits as matters
progress.
3.48 pm
I thank Members for their contributions this afternoon, which
were eloquent and well put. It is good that we are bringing this
issue to the seat of power—the seat of Government—so that
Ministers understand our fears. While we embrace AI, there must
be built-in protections for people because not all employers are
good employers. There are some bad employers about who will take
advantage of AI. We need safeguards for workers and people being
replaced by machines. At the end of the day, this issue is coming
down our street, so we will need to revisit it and understand it better.
Question put and agreed to.
Resolved,
That this House has considered the potential impact of artificial
intelligence on the labour market.