The UK government will introduce an Online Safety Bill next year which could result in fines higher than those possible under the GDPR for companies that allow illegal content to be posted on their platforms.
The plans are nominally aimed at protecting children online by banning things like terrorist content, child sexual abuse material, and anything promoting suicide. Misinformation is also included, if it is deemed to cause major physical or psychological harm.
Regulator Ofcom will be given the power to fine companies up to 10% of global annual turnover or £18 million, whichever is higher, for serious transgressions. It will also be empowered to block such services if they choose not to comply, although it’s unclear exactly how.
So-called Category One companies — like Facebook, Twitter, TikTok and others with a major online presence and “high-risk features” — will face the most stringent requirements, although the majority of firms online fall into lower categories.
Nevertheless, the law places new requirements not only on social media giants but also on a swathe of online services including messaging, cloud storage, search engines, video games and online forums.
As some of these platforms use end-to-end encryption, there are concerns over whether they could be penalized for being unable to monitor content disseminated by their users.
Stephen Kelly, CEO of entrepreneurs’ network Tech Nation, welcomed the proposals.
“Given our leadership in the application of ethics and integrity in IT, it should be no surprise that the UK is moving decisively to tackle online harms, one of the biggest and most complex digital challenges of our time,” he argued.
“Equally, it offers the UK the opportunity to lead a new category of tech, such as ‘safetech,’ building on our heritage of regtech and compliance, which already assure global markets and economies.”
However, others were less confident. Adam Hadley, director of the Online Harms Foundation, reportedly described the plans as “at best ineffective and at worst counterproductive.”
“Creating onerous financial penalties on tech companies only incentivises overzealous removal of content, leading to content that is not illegal being removed and pushing conspiracy theorists on to self-owned underground platforms, where their views cannot be challenged or easily monitored,” he argued.