UK watchdog Ofcom is set to gain the power to block access to online services that fail to do enough to protect children and other users.
The regulator would also be able to fine Facebook and other tech giants billions of pounds, and require them to publish an audit of efforts to tackle posts that are harmful but not illegal.
The government is to include the measures in its Online Harms Bill.
The proposed law would not introduce criminal prosecutions, however.
Nor would it target online scams and other types of internet fraud.
This will disappoint campaigners, who had called for the inclusion of both.
It will, however, allow Ofcom to demand tech firms take action against child abuse imagery shared via encrypted messages, even if the apps in question are designed to stop their makers from being able to peer within.
Digital Secretary Oliver Dowden told parliament the legislation represented “decisive action” to protect both children and adults online.
“A 13-year-old should no longer be able to access pornographic images on Twitter, YouTube will not be allowed to recommend videos promoting terrorist ideologies and anti-Semitic hate crimes will need to be removed without delay.”
And he said that the government would introduce secondary legislation imposing criminal sanctions for senior managers if those changes did not come about.
Mr Dowden made a commitment to bring the bill before parliament in 2021, but it may not come into force until 2022 or later.
The Children’s Commissioner for England, Anne Longfield, said there were signs that new laws would have “teeth”, including strong sanctions for companies found to be in breach of their duties.
She welcomed the requirement on messaging apps to use technology to identify child abuse and exploitation material when directed to by the regulator.
“However, much will rest on the detail behind these announcements, which we will be looking at closely,” she added.
The TechUK trade association said “significant clarity” was needed about how the proposals would work in practice, adding that the “prospect of harsh sanctions” risked discouraging investment in the sector.
Will the new law tackle ‘fake news’ and conspiracy theories?
Alistair Coleman, BBC Monitoring
The Online Harms Bill concentrates its fire on the very real dangers posed by child sexual exploitation, terrorism, bullying and abuse, but also addresses “fake news” on social media.
There have been calls to tackle disinformation for years, especially after the issue was highlighted during the 2016 US presidential election and Brexit referendum campaigns.
This year alone, misleading content about Covid-19, vaccines and 5G mobile technology has resulted in real-life harm – mobile phone masts have been torched as a result of conspiracy theories, and some people around the world were convinced that Covid-19 posed no real threat, leading to illness and death.
The government’s consultation says that social media companies’ response to disinformation has been too patchy, exposing users to potentially dangerous content.
But the government itself has been criticised for what was described as “unacceptable” delays in publishing the Online Harms Bill, and now it’s unlikely to come into effect before 2022. When it does, content that could cause harm to public health or safety will fall under Ofcom’s powers.
With potentially harmful anti-vaccine posts spreading online, could both social media companies and the authorities have acted earlier?
The government claims the new rules will set “the global standard” for online safety.
Plans to introduce the law were spurred on by the death of 14-year-old Molly Russell, who killed herself after viewing online images of self-harm.
In 2019, her father Ian Russell accused Instagram of being partly to blame, leading ministers to demand social media companies take more responsibility for harmful online content.
“For 15 years or so, the platforms have been self-regulating and that patently hasn’t worked because harmful content is all too easy to find,” he told BBC Radio 4’s Today programme.
He added that unless top executives could be held criminally liable for their products’ actions then he did not believe they would change their behaviour.
In his parliamentary statement, Mr Dowden said he has asked the Law Commission “to examine how the criminal law will address” this sensitive area.
“We do need to take careful steps to make sure we don’t inadvertently punish vulnerable people, but we do need to act now to prevent future tragedies,” he said.
Under the proposals, Ofcom would be able to fine companies up to 10% of their annual global turnover or £18m – whichever is greater – if they refused to remove illegal content or failed to satisfy its concerns about posts that were legal but still harmful.
Examples of the latter might include pornography that is visible to youngsters, bullying and dangerous disinformation, such as misleading claims about the safety of vaccinations.
In addition, Ofcom could compel internet service providers to block devices from connecting to offending services.
The regulator would be given ultimate say over where to draw the line and what offences would warrant its toughest sanctions.
But in theory, it could fine Instagram’s parent company Facebook $7.1bn and YouTube’s owner Google $16.1bn based on their most recent earnings.
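The fine cap described above is a simple calculation: 10% of annual global turnover, with an £18m floor for smaller firms. A minimal sketch (the function name and the illustrative turnover figures are this example's own, not from the article):

```python
def max_fine(annual_global_turnover_gbp: float, floor_gbp: float = 18_000_000) -> float:
    """Maximum penalty under the proposed rules: 10% of annual global
    turnover or £18m, whichever is greater."""
    return max(0.10 * annual_global_turnover_gbp, floor_gbp)

# A firm with £50m turnover: 10% would be £5m, below the £18m floor,
# so the cap is £18m.
print(max_fine(50_000_000))       # 18000000.0

# A giant with £70bn turnover: the cap is 10%, i.e. £7bn.
print(max_fine(70_000_000_000))   # 7000000000.0
```

The floor means the sanction stays meaningful for companies whose turnover is small, while the percentage scales it up for the largest platforms.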
The new regulations would apply to any company hosting user-generated content accessible to UK viewers, with certain exceptions including news publishers’ comments sections and small businesses’ product review slots.
So, chat apps, dating services, online marketplaces and even video games and search engines would all be covered.
However, the bigger companies would have added responsibilities, including having to publish regular transparency reports about steps taken to tackle online harms, and being required to clearly define what types of “legal but harmful” content they deem permissible.
In response, Facebook’s head of UK public policy Rebecca Stimson said the firm had “long called for new rules to set high standards”, adding that regulations were needed so that so many important decisions were not left to private companies alone.
‘Clear and manageable’
The child protection charity NSPCC had wanted the law to go further by threatening criminal sanctions against senior managers.
While the bill is expected to mention the possibility, it would do so only as a provision that would require further legislation.
“We set out six tests for robust regulation – including action to tackle both online sexual abuse and harmful content and a regulator with the power to investigate and hold tech firms to account with criminal and financial sanctions,” said the NSPCC’s chief executive Peter Wanless.
“We will now be closely scrutinising the proposals against those tests.”
A spokesman for the government said ministers wanted to see how well the initial set of powers worked before pursuing criminal action.
MoneySavingExpert.com founder Martin Lewis and the Money and Mental Health Policy Institute have also campaigned for the law to give the watchdog new powers to crack down on scammers.