The UK’s Online Safety Act explained: what you need to know

The UK’s Online Safety Act (OSA) became law in October 2023 with the aim of enhancing online safety for all internet users, particularly children, by placing obligations on service providers that either host user-generated content or provide search engine functionality.

Under their new obligations, more than 100,000 companies – including social media platforms, online forums, messaging services and video-sharing sites – are required to proactively prevent their users from seeing illegal or harmful content. This includes assessing the risks of such content appearing on their platforms, implementing “robust” age limits for accessing certain content, and quickly removing offending content when it does appear.

Failure to comply with the OSA’s measures can result in significant penalties for service providers. Online harms regulator Ofcom, for example, has the power to impose substantial fines of up to 10% of a company’s global revenue or £18m, whichever is higher, and may require payment providers or advertisers to stop working with a non-compliant platform.
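As a simple illustration of how that maximum penalty works, the short sketch below takes the greater of £18m or 10% of a company’s global revenue; the revenue figure used is a made-up example, not any real company’s.

```python
# Illustrative sketch only: the OSA's maximum fine is the greater of £18m
# or 10% of a company's global revenue. The revenue figure below is hypothetical.

def max_osa_fine(global_revenue_gbp: float) -> float:
    """Return the theoretical maximum fine Ofcom could impose under the OSA."""
    return max(0.10 * global_revenue_gbp, 18_000_000)

# A hypothetical firm with £2bn in global revenue: prints £200,000,000
print(f"£{max_osa_fine(2_000_000_000):,.0f}")
```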

Senior managers at online platforms can also face criminal liability for failing to comply with Ofcom’s information requests, or for not ensuring their company adheres to its child safety duties, while the regulator itself can conduct audits and direct companies to take specific steps to improve their online safety measures.

How Ofcom would regulate under the act was set out in its December 2024 Illegal Harms Codes and Guidance, which became enforceable on 17 March 2025. Under the codes, Ofcom expects any internet service that children can access (including social media networks and search engines) to carry out robust age checks, to configure its algorithms to filter the most harmful content out of children’s feeds, and to implement content moderation processes that ensure swift action is taken against such content.

However, since its inception, the OSA has faced a number of criticisms, including for vague and overly broad definitions of what constitutes “harmful content”, and the threat it poses to encrypted communications.

There has also been extensive debate about whether the OSA is effective in practice, particularly since age verification measures went live in late July 2025, requiring platforms to verify users’ ages before they can access certain content or sites, and in the wake of the July 2024 Southport riots, in which online misinformation played a key role in the spread of violence.

Age verification measures

Since 25 July 2025, online service providers have been required to put age checks in place to ensure children cannot access pornography, or content relating to self-harm, suicide or eating disorders that could be harmful to them.

The plans for “robust age checks” were outlined in Ofcom’s May 2024 draft online child safety rules, which contained more than 40 other measures tech firms would need to implement by 25 July 2025 to comply with their new legal obligations under the act.

While much of the media focus since the deadline has been on the age-gating of porn sites, the change has also affected social media firms, dating apps, live streamers and some gaming companies.

The methods these services can use to assure people’s ages vary, and include facial age estimation technologies, open banking, photo-ID matching, digital identity services and credit card checks. However, since the age gate deadline on 25 July, online searches for virtual private networks (VPNs) – which encrypt a user’s connection to the internet, allowing them to bypass the OSA’s measures – have skyrocketed, with Proton alone reporting an 1,800% spike in daily sign-ups for its VPN service in the UK, and VPN apps topping Apple’s App Store download charts.

The Age Verification Providers Association (AVPA), meanwhile, said there has been a sharp increase in age checks in the UK since age gating was introduced, with an additional five million checks being carried out each day.

As it stands, the OSA places no limits on age verification providers distributing, profiling or monetising the personal data of UK residents going through verification, although Ofcom notes on its website that it may refer providers to the data regulator if it believes an age verification company has not complied with data protection law.

Some internet users have expressed frustration that the choice of which age assurance technology to use lies solely with the platform, meaning that to access its services they must hand over their sensitive personal data to a third party of the platform’s choosing. While these firms are subject to UK data protection law, it is unclear how the OSA’s age verification measures will interact with the Data Use and Access Act’s (DUAA) loosened “purpose limitation” rules, which make it easier to process data outside of its originally intended use.

The DUAA will also narrow current protections against automated decision-making (ADM) so that they only apply to decisions that either significantly affect individuals or involve special category data, and will introduce a list of “recognised legitimate interests” – including national security, the prevention of crime and safeguarding – that organisations can rely on to process data without the need to conduct legitimacy assessments.

There are also concerns that, under the OSA, political content is being censored in the name of protecting children, with reports of Palestine-related content being placed behind age verification walls on X and Reddit. Other reported examples of legitimate speech being restricted as a result of age-gating at scale include users being unable to access content related to Alcoholics Anonymous and other addiction support, medical cannabis, the war in Ukraine, and even images of historical art, such as Francisco de Goya’s 19th-century painting Saturn Devouring His Son.

Some civil society groups and academics have also expressed concern that Ofcom’s guidance on the OSA so far incentivises platforms to adopt a “bypass strategy”, whereby they are encouraged to moderate content in ways that are more restrictive than necessary to avoid potential fines. This approach could lead to the over-removal of legitimate speech while potentially restricting users’ freedom of expression. 

Breaking encryption

Aside from age verification, the most controversial aspect of the act is the power it gives Ofcom to require tech firms to install “accredited technology” to monitor encrypted communications for illegal content. In essence, this would mean tech companies using software to bulk-scan messages on encrypted services (such as WhatsApp, Signal and Element) before they are encrypted, a technique known as client-side scanning (CSS).
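To illustrate the concept in the simplest possible terms, the sketch below models client-side scanning as hash-matching a message against a blocklist of known illegal content on the user’s device, before the message is encrypted. Every name and value in it is a hypothetical stand-in – real proposals typically involve perceptual hashing or machine learning classifiers rather than exact hashes, and no Ofcom-accredited implementation currently exists.

```python
# A minimal, illustrative sketch of client-side scanning (CSS) via hash-matching.
# All names, hashes and stubs here are hypothetical; this is not any vendor's
# or Ofcom-accredited implementation.

import hashlib
from typing import Optional

# Hypothetical blocklist of SHA-256 hashes of known illegal content,
# distributed to the client device by the platform or a third party.
BLOCKED_HASHES = {
    "aa6b2ad3c79b0b87e8c8bf3d3a536ed90ebbcbec324f16d6e12b5a4f6741a37c",
}

def encrypt(plaintext: bytes) -> bytes:
    """Stand-in for the messenger's end-to-end encryption step (not real cryptography)."""
    return plaintext[::-1]

def send_message(plaintext: str) -> Optional[bytes]:
    # The scan happens on the device, before encryption, which is why critics
    # describe CSS as bulk interception even though end-to-end encryption remains.
    digest = hashlib.sha256(plaintext.encode()).hexdigest()
    if digest in BLOCKED_HASHES:
        print("Message matches blocklist: flagged and blocked before encryption")
        return None
    return encrypt(plaintext.encode())

if __name__ == "__main__":
    # An ordinary message is scanned, found clean, then encrypted and sent.
    send_message("Hello, this is an ordinary message")
```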

Implementing such measures would undermine the security and privacy of encrypted services, as the content of every message and email would be scanned to check whether it contains illegal material. The government has repeatedly justified this as necessary for stopping the creation and spread of child sexual abuse material (CSAM), as well as violent crime and terrorism. Cryptographic experts, however, have repeatedly argued that measures mandating tech firms to proactively detect harmful content through client-side scanning should be abandoned.

A policy paper written by Ross Anderson, a Cambridge University professor of security engineering, and researcher Sam Gilbert in October 2022, for example, argued that using artificial intelligence (AI)-based scanning to examine the content of messages would raise an unmanageable number of false alarms and prove “unworkable”. They further claimed the technology is “technically ineffective and impractical as a means of mitigating violent online extremism and child sexual abuse material”.

A previous October 2021 paper from Anderson and 13 other cryptographic experts, including Bruce Schneier, argued that while client-side scanning “technically” allows for end-to-end encryption, “this is moot if the message has already been scanned for targeted content. In reality, CSS is bulk intercept, albeit automated and distributed.”

In September 2023, BCS, The Chartered Institute for IT, said the government’s proposals on end-to-end encryption were not possible without creating systemic security risks and, in effect, bugging millions of phone users.

It argued that the government was seeking to impose a technical solution to a problem that can only be solved by broader interventions from police, social workers and educators, noting that some 70% of BCS’ 70,000 members said they were not confident it is possible to have both truly secure encryption and the ability to check encrypted messages for criminal material.

The proposals also led to a backlash from encrypted messaging providers, including WhatsApp, Signal and Element, which threatened to withdraw their services from the UK if the bill became law.

As it stands, while Ofcom does have the power to compel companies to scan for child sexual abuse material in encrypted environments, it is still working on guidance for tech firms around how “accredited technologies” such as client-side scanning and hash-matching can be implemented to protect child safety online.

There are currently no “accredited technologies” that Ofcom requires companies to use, with final guidance on the matter planned for publication in Spring 2026.

Online disinformation persists

Although the bill eventually received royal assent in October 2023 – four-and-a-half years after the online harms whitepaper was published in April 2019 – its ability to deal with real-world disinformation is still an open question. In May 2025, for example, the government and Ofcom were still in disagreement over whether the act even covers misinformation.

Following its inquiry into online misinformation and harmful algorithms, the Commons Science, Innovation and Technology Committee (SITC) published a report of its findings in July 2025, outlining how the OSA fails to deal with the algorithmic amplification of “legal but harmful” misinformation.

Highlighting the July 2024 Southport riots as an example of how “online activity can contribute to real-world violence”, the SITC warned that while many parts of the OSA were not fully in force at the time of the unrest, “we found little evidence that they would have made a difference if they were”.

It said this was due to a mixture of factors, including weak misinformation-related measures in the act itself, as well as the business models and opaque recommendation algorithms of social media firms.

“It’s clear that the Online Safety Act just isn’t up to scratch,” said SITC chair Chi Onwurah. “The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality.

“Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation.”

These principles include public safety, free and safe expression, responsibility (including for both end users and the platforms themselves), control of personal data and transparency.

Development hell

While controversies around certain aspects of the act are still ongoing, its process of becoming legislation was also fraught with tension, running through many iterations since the UK government initially published its Online Harms Whitepaper in April 2019.

Announcing the new measures, the then prime minister Theresa May argued these companies “have not done enough for too long” to protect their users, particularly young people, from “legal but harmful” content.

Although this was the world’s first framework designed to hold internet companies accountable for the safety of their users, outlining proposals to place a statutory “duty of care” on those companies, the legislation did not receive royal assent to become an act until October 2023.

While the government published an initial response to its whitepaper in February 2020 and a full response in December 2020, which provided more detail on the proposals, an initial draft of the bill did not materialise until May 2021.

At this stage, the draft bill contained a number of new measures, including specific duties for “Category 1” companies – those with the largest online presence and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter – to protect “democratically important” content and publish up-to-date assessments of their impact on freedom of expression, as well as new criminal liability for senior managers.

Further additions to the bill came in February 2022, when the government expanded the list of “priority offences” that tech companies would have to proactively prevent people from being exposed to. While terrorism and child sexual abuse were already included in the priority list, the government redrafted it to include revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, the promotion or facilitation of suicide, people smuggling and sexual exploitation. There are now more than 130 priority offences outlined in the act.

In November 2022, the “legal but harmful” aspect of the bill – which attracted strong criticism from Parliamentary committees, campaign groups and tech professionals alike – was then dropped, meaning companies would no longer be obliged to remove or restrict legal content, or otherwise suspend users for posting or sharing that content. Instead, the measures around “legal but harmful” were reduced to only apply to children.

However, controversy continued – in January 2023, the then-Conservative government attempted to amend the bill so that existing immigration offences would be incorporated into the list of “priority offences”, meaning tech companies could be forced to remove videos of people crossing the English Channel “which show that activity in a positive light”. “Unlawful immigration” content is still included in the act’s list of priority offences.

Throughout this entire process, the bill attracted strong criticism. The Open Rights Group and other civil society organisations, for example, called for its complete overhaul in September 2022, on the basis that its measures threaten privacy and freedom of speech.

They specifically highlighted concerns around the act’s provisions to compel online companies to scan the content of users’ private messages, and the extensive executive powers granted to the secretary of state to define what constitutes lawful speech.

