House of Commons debate on Online Harms
The following is the full text of a debate in the House of Commons today on Online Harms.
[Relevant Documents: Online abuse and the experience of disabled people, Petitions Committee, First Report of Session 2017-19, HC 759 and the Government response, HC 2122; and Oral evidence taken before the Petitions Committee on 21 May and 2 July 2020, on Tackling Online Abuse, HC 364.]
Jeremy Wright (Kenilworth and Southam) (Con) I beg to move, That this House recognises the need to take urgent action to reduce and prevent online harms; and urges the Government to bring forward the Online Harms Bill as soon as possible. The motion stands in my name and those of the hon. Member for Kingston upon Hull North (Dame Diana Johnson) and my hon. Friend the Member for Congleton (Fiona Bruce). I begin by thanking the Backbench Business Committee for finding time for what I hope the House will agree is an important and urgent debate. I am conscious that a great number of colleagues wish to speak and that they have limited time in which to do so, so I will be as brief as I can. I know also that there are right hon. and hon. Members who wished to be here to support the motion but could not be. I mention, in particular, my hon. Friend the Member for Solihull (Julian Knight), the Chair of the Digital, Culture, Media and Sport Committee, who is chairing the Committee as we speak. I hope that today’s debate will largely be about solutions, but perhaps we should begin with the scale of the problem. The term “online harms” covers many things, from child sexual exploitation to the promotion of suicide, hate speech and intimidation, disinformation perpetrated by individuals, groups and even nation states, and many other things. Those problems have increased with the growth of the internet, and they have grown even faster over recent months as the global pandemic has led to us all spending more time online. Let me offer just two examples. First, between January and April this year, as we were all starting to learn about the covid-19 virus, there were around 80 million interactions on Facebook with websites known to promulgate disinformation on that subject. By contrast, the websites of the World Health Organisation and the US Centres for Disease Control and Prevention each had around 6 million interactions. Secondly, during roughly the same period, online sex crimes recorded against children were running at more than 100 a day. The online platforms have taken some action to combat the harms I have mentioned, and I welcome that, but it is not enough, as the platforms themselves mostly recognise.
Sir John Hayes (South Holland and The Deepings) (Con) My right hon. and learned Friend is right to highlight the horror of degrading and corrupting pornography. Indeed, the Government have no excuse for not doing more, because the Digital Economy Act 2017 obliges them to do so. Why do we not have age verification, as was promised in that Act and in our manifesto? It is a straightforward measure that the Government could introduce to save lives in the way my right hon. and learned Friend describes.
Jeremy Wright Digital platforms should also recognise that a safer internet is, in the end, good for business. Their business model requires us to spend more and more time online, and we will do that only if we feel safe there. The platforms should recognise that Governments must act in that space, and that people of every country with internet access quite properly expect them to. We have operated for some time on the principle that what is unacceptable offline is unacceptable online. How can it be right that actions and behaviours that cause real harm and would be controlled and restricted in every other environment, whether broadcast media, print media or out on the street, are not restricted at all online? I accept that freedom of speech online is important, but I cannot accept that the online world is somehow sacred space where regulation has no place regardless of what goes on there. Given the centrality of social media to modern political debate, should we rely on the platforms alone to decide which comments are acceptable and which are unacceptable, especially during election campaigns? I think not, and for me the case for online regulation is clear. However, it must be the right kind of regulation—regulation that gives innovation and invention room to grow, that allows developing enterprises to offer us life-enhancing services and create good jobs, but that requires those enterprises to take proper responsibility for their products and services, and for the consequences of their use. I believe that that balance is to be found in the proposed duty of care for online platforms, as set out in the Government’s White Paper of April last year. I declare an interest as one of the Ministers who brought forward that White Paper at the time, and I pay tribute to all those in government and beyond, including the talented civil servants at the Department for Digital, Culture, Media and Sport, who worked so hard to complete it. This duty of care is for all online companies that deal with user-generated content to keep those who use their platforms as safe as they reasonably can.
Jim Shannon (Strangford) (DUP)
Jeremy Wright I recognise that what I am talking about is not the answer to every question in this area, but it would be a big step towards a safer online world if designed with sufficient ambition and implemented with sufficient determination. The duty of care should ask nothing unreasonable of the digital platforms. It would be unreasonable, for example, to suggest that every example of harmful content reaching a vulnerable user would automatically be a breach of the duty of care. Platforms should be obliged to put in place systems to protect their users that are as effective as they can be, not that achieve the impossible. However, meeting that duty of care must mean doing more than is being done now. It should mean proactively scanning the horizon for those emerging harms that the platforms are best placed to see and designing mitigation for them, not waiting for terrible cases and news headlines to prompt action retrospectively. The duty of care should mean changing algorithms that prioritise the harmful and the hateful because they keep our attention longer and cause us to see more adverts. When a search engine asked about suicide shows a how-to guide on taking one’s own life long before it shows the number for the Samaritans, that is a design choice. The duty of care needs to require a different design choice to be made. When it comes to factual inquiries, the duty of care should expect the prioritisation of authoritative sources over scurrilous ones. It is reasonable to expect these things of the online platforms. Doing what is reasonable to keep us safe must surely be the least we expect of those who create the world in which we now spend so much of our time. We should legislate to say so, and we should legislate to make sure that it happens. That means regulation, and as the hon. Gentleman suggests, it means a regulator—one that has the independence, the resources and the personnel to set and investigate our expectations of the online platforms. For the avoidance of doubt, our expectations should be higher than the platforms’ own terms and conditions. However, if the regulator we create is to be taken seriously by these huge multinational companies, it must also have the power to enforce our expectations. That means that it must have teeth and a range of sanctions, including individual director liability and site blocking in extreme cases. We need an enforceable duty of care for online platforms to begin making the internet a safer place. Here is the good news for the Minister, who I know understands this agenda well. So often, such debates are intended to persuade the Government to change direction, to follow a different policy path. I am not asking the Government to do that, but rather to continue following the policy path they are already on—I just want them to move faster along that path. I am not pretending that it is an easy path. There will be complex and difficult judgments to be made and significant controversy in what will be groundbreaking and challenging legislation, but we have shied away from this challenge for far too long. The reason for urgency is not only that, while we delay, lives continue to be ruined by online harms, sufficient though that is. It is also because we have a real opportunity and the obligation of global leadership here. 
The world has looked with interest at the prospectus we have set out on online harms regulation, and it now needs to see us follow through with action so that we can leverage our country’s well-deserved reputation for respecting innovation and the rule of law to set a global standard in a balanced and effective regulatory approach. We can only do that when the Government bring forward the online harms Bill for Parliament to consider and, yes, perhaps even to improve. We owe it to every preyed-upon child, every frightened parent and everyone abused, intimidated or deliberately misled online to act, and to act now.
Mr Deputy Speaker (Mr Nigel Evans) 2.50 pm
Dame Diana Johnson (Kingston upon Hull North) (Lab) Three years ago, Parliament passed legislation to close this disastrous regulation gap. Three years on, the Government have still not implemented it. Assurances that the regulation gap will be filled by the forthcoming online harms legislation do not stand up to objective scrutiny. This is a child protection disaster happening now, and the Government could and, I hope, will act now. Children are being exposed to online pornography on an alarming scale, and during the covid-19 pandemic, there is no doubt that the figures will have increased even more with children more often having unsupervised online access. The issue is the widespread availability and severity of online pornography accessible at home. It is no longer about adult magazines on the top shelf in the newsagent. Contemporary pornography is also overwhelmingly violent and misogynistic, and it feeds and fuels the toxic attitudes that we see particularly towards women and girls. Back in 2017, Parliament passed part 3 of the Digital Economy Act. Enacted, it would prohibit commercial pornography websites from making their content available to anyone under the age of 18 and create a regulator and an enforcement mechanism. It was backed by the leading children’s charities, including the National Society for the Prevention of Cruelty to Children and Barnardo’s, as well as the majority of parents. However, in 2019, the Government announced that they would not be implementing part 3 of the 2017 Act. In the online harms White Paper in February, the Government said that any verification “will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions”. That is not good enough. Parliament has already spoken. We have said what we want to happen. I expect the Government to build on part 3 of the 2017 Act. It is set out and ready to go. They should act on it now. 2.53 pm
Damian Collins (Folkestone and Hythe) (Con) There are difficult decisions to be made in assessing what harmful content is and assessing what needs to be done, but I do not believe those decisions should be made solely by the chief executives of the social media companies. There should be a legal framework that they have to work within, just as people in so many other industries do. It is not enough to have an online harms regulatory system based just on the terms and conditions of the companies themselves, in which all Parliament and the regulator can do is observe whether those companies are administering their own policies. We must have a regulatory body that has an auditing function and can look at what is going on inside these companies and the decisions they make to try to remove and eliminate harmful hate speech, medical conspiracy theories and other more extreme forms of harmful or violent content. Companies such as Facebook say that they remove 95% of harmful content. How do we know? Because Facebook tells us. Has anyone checked? No. Can anyone check? No; we are not allowed to check. Those companies have constantly refused to allow independent academic bodies to go in and scrutinise what goes on within them. That is simply not good enough. We should be clear that we are not talking about regulating speech. We are talking about regulating a business model. It is a business model that prioritises the amplification of content that engages people, and it does not care whether or not that content is harmful. All it cares about is the engagement. So people who engage in medical conspiracy theories will see more medical conspiracy theories. A young person who engages with images of self-harm will see more images of self-harm. No one is stepping in to prevent that. How do we know that Facebook did all it could to stop the live broadcast of a terrorist attack in Christchurch, New Zealand? No one knows. We have only Facebook’s word for it, and the scale of that problem could have been a lot worse. The tools and systems of these companies are actively directing people to harmful content. People often talk about how easy it is to search for this material. Companies such as Facebook will say, “We downgrade this material on our site to make it hard to find,” but they direct people to it. People are not searching for it—it is being pushed at them. Some 70% of what people watch on YouTube is selected for them by YouTube, not searched for by them. An internal study done by Facebook in Germany in 2016, which the company suppressed and was leaked to the media this year, showed that 60% of people who joined Facebook groups that shared extremist material did so at the recommendation of Facebook, because they had engaged with material like that before. That is what we are trying to regulate—a business model that is broken—and we desperately need to move on with online harms. 2.56 pm
Chris Elmore (Ogmore) (Lab) I pay tribute to the hon. Member for Folkestone and Hythe (Damian Collins) for not only his speech but his chairmanship of the DCMS Committee, which he carried out without fear or favour. He took on the platforms, and they did not like it. All credit to him for standing up for what he believes in and trying to take on these giants. In the two minutes I have left, I want to talk about the inquiry of my all-party parliamentary group on social media in relation to child harm, which the right hon. and learned Member for Kenilworth and Southam touched on. The Internet Watch Foundation is a charity that works with tech industries and is partly funded by them. It also works with law enforcement agencies and is funded by the Government and currently by the European Union. It removes self-generated images of child abuse. It removes URLs showing children who have been coerced and groomed into taking images of themselves in a way that anyone in this House would find utterly disgusting and immoral. That is its sole, core purpose. The problem is extremely complex. The IWF has seen a 50% increase in public reports of suspected child abuse over the past year, but the take-down rate of URLs has dropped by 89%. I have pressed DCMS Ministers and Cabinet Office Ministers to ensure that IWF funding will continue, to address the fact that these URLs are not being taken down and to put more resources into purposefully tackling this abhorrent problem of self-generated harm, whether the children are groomed through platforms, live streaming or gaming. The platforms have not gone far enough. They are not acknowledging the problem in front of them. I honestly believe that if a future Bill provides the power for the platforms to decide what is appropriate and for Ofcom to make recommendations or fine them on that basis, it is a flawed system. It is self-regulation with a regulator—it does not make any sense. The platforms themselves say that it does not work. In closing, will the Minister please—please—get a grip on the issues that the IWF is raising, continue its funding, and do all that he can to protect children from the harm that many of them face in their bedrooms and homes across the UK? 2.59 pm
Fiona Bruce (Congleton) (Con) We need to do so much more to protect children from being drawn into producing material themselves. There is growing concern about self-generated indecent images of children, made when a child is tricked or coerced into sending sexual material of themselves. I commend the work of my right hon. Friend the Member for Bromsgrove (Sajid Javid), who, with the Centre for Social Justice, has launched an investigation into child sexual abuse, and I commend his op-ed in The Sun on Sunday last week. It is not often that I commend something in The Sun, but in his op-ed he highlighted the increase in livestreamed abuse in which sex offenders hire traffickers in countries such as the Philippines to find children for them to violate via a video link. I also thank the International Justice Mission for its effective work in highlighting this despicable trade and consumption, in respect of which the UK is the world’s third largest offender. As the IJM says, we need to do more than highlight this; the Government need to improve prevention, detection and prosecution. Yes, we have made great strides as a country in detecting and removing child sexual abuse material from UK-hosted websites, but livestreamed abuse is not being detected or reported and much more needs to be done by tech companies and social media platforms to rectify the situation. Legislation must require them to act. For example, they could adopt a safety-by-design approach so that a camera cannot be flipped to face a child. Regulation of the online space is needed to ensure that companies take swift and meaningful action to detect the online sexual exploitation of children, and there must be more accountability for offenders who commit this abuse. We should not distinguish the actions of those offenders from the actions of those who prey on children in person. Every image depicts a real child being hurt in the real world. Communities of online offenders often ask for original videos and images as their price of admission, prompting further targeting and grooming of vulnerable children. The Government need to act urgently to help better to protect vulnerable children—indeed, all children—and to promote greater awareness, including through education. Children need to know that it is not their fault and that they can talk to someone about it, so that they do not feel, as so many children who have talked to Childline have said, “I can’t deal with this anymore. I want to die.” 3.02 pm
Stephen Doughty (Cardiff South and Penarth) (Lab/Co-op) In that Westminster Hall debate, I spoke about the range of less well-known platforms that the Government must get to grips with—the likes of Telegram, Parler, BitChute and various other platforms that are used by extremist organisations. I pay tribute to the work that HOPE not Hate and other organisations are doing. I declare an interest as a parliamentary friend of HOPE not Hate and commend to the Minister and the Government its excellent report on online regulation that was released just this week. I wish to give one example of why it is so crucial that the Government act, and act now, and it relates to the behaviour of some of the well-known platforms. In the past couple of weeks, I have spoken to one of those platforms: YouTube—Google. It is not the first time that I have spoken to YouTube; I have previously raised concerns about its content on many occasions as a member of the Home Affairs Committee. It was ironic to be asked to take part in a programme to support local schools on internet safety and being safe online, when at the same time YouTube, despite my personally having reported instances of far-right extremism, gang violence and other issues that specifically affect my constituency, has refused to remove that content. YouTube has not removed it, despite my reporting it. I am talking about examples of gang videos involving convicted drug dealers in my constituency; videos of young people dripping in simulated blood after simulated stabbings; videos encouraging drug dealing and violence and involving young people as actors in a local park, just hundreds of metres from my own house—but they have not been removed, on grounds of legitimate artistic expression. There are examples of extremist right-wing organisations promoting hatred against Jews, black people and the lesbian, gay, bisexual and transgender community that I have repeatedly reported, but they were still on there at the start of this debate. The only conclusion I can draw is that these companies simply do not give a damn about what the public think, what parents think, what teachers think, what all sides of the House think, what Governments think or what the police think, because they are failing to act, having been repeatedly warned. That is why the Government must come in and regulate, and they must do it sooner rather than later. We need to see action taken on content relating to proscribed organisations—I cannot understand how that content is online when those organisations are proscribed by the Government—where there are clear examples of extremism, hate speech and criminality. I cannot understand why age verification is not used even as a minimum standard on some of these gang videos and violent videos, which perhaps could be justified in some parallel world, when age verification is used for other content. Some people talk about free speech. The reality is that these failures are leading to a decline in freedom online and in safety for our young people. 3.06 pm
Damian Hinds (East Hampshire) (Con) When I was at the Department for Education, I heard repeatedly from teenagers who were worried about the effect on their peers’ mental health of the experience of these curated perfect lives, with the constant scoring of young people’s popularity and attractiveness and the bullying that no longer stops when a young person comes through their parents’ front door but stays with them overnight. I heard from teachers about the effect of technology on sleep and concentration and on taking too much time from other things that young people should be doing in their growing up. I take a lot of what will be in this legislation as read, so what I will say is not an exclusive list, but I have three big asks of what the legislation and secondary legislation should cover for children. By children, I mean anybody up to the age of 16 or 18. Let us not have any idea that there is a separate concept of a digital age of consent that is in some way different. First, the legislation will of course tackle the promotion of harms such as self-harm and eating disorders, but we need to go further and tackle the prevalence and normalisation of content related to those topics so that fewer young people come across it in the first place. Secondly, on compulsive design techniques such as autoplay, infinite scroll and streak rewards, I do not suggest that the Government should get in the business of designing applications, but there need to be natural breaks, just as there always were when children’s telly came to an end or in running out of coins at the amusement arcade, to go and do something else. Actually, we need to go further, with demetrification—an ugly word but an important concept—because children should not be worrying about their follower-to-following ratio or how many likes they get when they post a photograph. Bear in mind that Facebook managed to survive without likes up to 2009. Thirdly, we need to have a restoration of reality, discouraging and, at the very least, clearly marking doctored photos and disclosing influencers’ product placements and not allowing the marketing of selfie facial enhancements to young children. It is not only about digital literacy and resilience, though that plays a part. The new material in schools from this term is an important step, but it will need to be developed further. It has always been hard growing up, but it is a lot harder to do it live in the glare of social media. This generation will not get another chance at their youth. That is why, yes, it is important that we get it right, but it is also important that we get it done and we move forward now. 3.09 pm
Catherine McKinnell (Newcastle upon Tyne North) (Lab) I want to address as well some of the most troubling material available online—material that has too often spilled over into the offline world with tragic consequences. From your internet browser today you could access a video that shows graphic footage of real-event stabbings before alleging that the attack was, in fact, a Jewish plot. If you were so inclined, you could watch a five-hour-long video that alleges a Jewish conspiracy to introduce communism around the world—10,000 people already have. I could go on. These videos and others like them are easily discoverable on some of the so-called alternative platforms that have become safe havens for terrorist propaganda, hate material and covid-19 disinformation, so it is crucial that when the Government finally bring their online harms Bill forward, it has teeth. The White Paper proposes establishing a new duty of care for users, overseen by an independent regulator, making it clear that fulfilling a duty of care means following codes of practice. The Government have rightly proposed two statutory codes—on sexual exploitation and abuse and on terrorism. Will the Minister now commit to bringing forward another code of practice on hate crime and wider harms? Without such a code, any duty of care for users will be limited to what the site’s terms and conditions allow. Terms and conditions are insufficient, as the Government acknowledge; they can be patchy and poorly applied. The Antisemitism Policy Trust, which provides the secretariat to the all-party parliamentary group against antisemitism, which I co-chair, has produced evidence outlining how hateful online materials can lead to violent hate crime offline. A code of practice on hate crime, with systems-level advice to start-ups and minimum standards for companies, will go some way towards creating a safer world. There is much more in the Bill that needs serious consideration, but as a minimum we need to see a code of practice for hate crime brought forward and given the same status as that for child sexual exploitation and abuse and terrorism, and I hope today that the Minister can give us some reassurance that this will be taken seriously. 3.12 pm
Karen Bradley (Staffordshire Moorlands) (Con) There is no doubt that the internet can be a force for good. Over the past few months, we have all enjoyed the fact that we can keep in touch with family and friends. We can work from home. Some people can even participate in certain parts of our proceedings, although clearly not this debate. But the internet can be used for harm. In the limited time I have I want to make just two points. One is about the impact on children and the other is about advertising online. When I was the Secretary of State for Digital, Culture, Media and Sport, I initially took the idea to the then Prime Minister, my right hon. Friend the Member for Maidenhead (Mrs May), that we should have an internet safety strategy. That is what has become the online harms strategy. The internet safety strategy was born out of my work in the Home Office when I was the Minister for Preventing Abuse, Exploitation and Crime. It was so clear to me through the work that I did in particular on protecting children that the internet was being used to harm children. We have some great successes. The WePROTECT initiative, for example, which has had a real impact on removing pornographic images of children and child abuse online, is a great success, but we must never rest on our laurels. I ask my hon. Friend the Minister, who knows full well about all this, because he was with me when lots of this work was happening in the Department, to deal with the issue of age verification on pornography. I know that it does not resolve every issue. It is not going to solve every problem, but there was a message given to me by children time and again. If there was one thing they wanted to see stopped, it was access to pornography because that was what was fuelling the harm that they faced. Turning to advertising, I will share with the House that this weekend I will be cooking a beef brisket that I will be purchasing from Meakins butchers in Leek, and I will be putting on it a beef rub. Hon. Members may ask why I am telling them that. I am telling them that because I have been mithered for weeks by my 15-year-old son, who has seen such a beef rub on Instagram. He is not getting his advertising from broadcast media. He is getting his advertising from the internet and he is desperate to try a beef rub on beef brisket, and I will therefore make sure he does so over the weekend.