Ofcom Finalizes Online Child Safety Rules
The United Kingdom's communications regulator, Ofcom, has finalized a comprehensive set of child safety rules under the Online Safety Act, ushering in what it calls a “reset” for how children experience the internet.
Announced Thursday, the new regulations require over 40 practical safeguards for apps, websites, and online platforms accessed by children in the UK. These range from filtering harmful content in social feeds to robust age checks and stronger governance requirements. The measures apply to platforms in social media, gaming, and search—any online service likely to be accessed by children under 18.
“These changes are a reset for children online,” said Dame Melanie Dawes, Ofcom’s Chief Executive. “They will mean safer social media feeds with less harmful and dangerous content, protections from being contacted by strangers and effective age checks on adult content. If companies fail to act they will face enforcement.”
The finalized Codes of Practice are the product of consultations with over 27,000 children, 13,000 parents, civil society organizations, child protection experts, and tech companies. The rules will be enforceable from July 25, 2025.
Algorithmic Filters, Age Assurance, and Governance
A key focus of the reforms is personalized recommendation algorithms—often the pathway through which children are exposed to harmful content. Under the new rules, platforms whose services pose a medium or high risk of harmful content must configure their recommender systems to filter that material out of children’s feeds.
The rules also impose mandatory age assurance on the most high-risk services. Platforms must verify users’ ages with a high degree of accuracy, and if unable to do so, must assume children are present and provide an age-appropriate experience. In some cases, this may mean blocking children’s access entirely to certain content, features, or services.
In addition, all providers must maintain fast-action processes to quickly assess and remove harmful material once identified.
“These reforms prioritize safety-by-design,” said a UK-based child safety policy expert. “The burden is finally shifting onto platforms to proactively assess and mitigate risks, rather than waiting for harm to happen.”
Child Safety Rules: More Control, Better Support for Children
Beyond content moderation, the rules require platforms to give children more control over their online environment. Required features include:
The ability to decline group chat invites.
Tools to block or mute accounts.
The option to disable comments on their own posts.
Mechanisms to flag content they do not wish to see.
Services must also provide supportive information to children who search for or encounter harmful material, including around topics like self-harm, suicide, or eating disorders.
Clear and accessible reporting and complaint tools are also mandatory. Ofcom requires platforms to ensure their terms of service are understandable to children and that complaints receive timely, meaningful responses.
Accountability at the Top
A standout requirement under the new framework is “strong governance.” Every platform must designate a named individual responsible for children’s safety, and senior leadership must annually review risk management practices related to child users.
“These aren’t just tech tweaks. This is a cultural shift in corporate responsibility,” said the child safety policy expert. “They [Ofcom] are holding leadership accountable for keeping children safe.”
Enforcement, Deadlines, and What’s Next
Tech firms have until July 24, 2025, to finalize risk assessments for services accessed by UK children. From July 25, 2025, they must implement the measures outlined in Ofcom’s Codes, or demonstrate alternative approaches that meet the same safety standards.
Ofcom has the authority to issue fines or apply to the courts to block access to non-compliant sites in the UK.
The child safety measures build upon earlier rules introduced under the Online Safety Act to prevent illegal harms, such as grooming and exposure to child sexual abuse material (CSAM). They also complement new age verification requirements for pornography websites.
More regulations are expected soon. Ofcom plans to launch a follow-up consultation on:
Banning accounts found to have shared CSAM.
Crisis response protocols for real-time harms.
AI tools to detect grooming and illegal content.
Hash matching to prevent the spread of non-consensual intimate imagery and terrorist material.
Tighter controls around livestreaming, which presents unique risks for children.
“Children deserve a safer internet. This framework lays the foundation, but we’re not stopping here,” Ofcom said in a statement.
Resources for Parents and Children
To accompany the new regulations, Ofcom published guidance for parents, including videos and answers to common safety questions. It also launched child-friendly content explaining what changes children can expect in their favorite apps and platforms.
As the codes go before Parliament for final approval, stakeholders across the tech ecosystem will be watching closely. For many, this marks a critical test of how well regulatory bodies can compel tech giants to prioritize child safety over engagement metrics.