Malwarebytes

If a fake moustache can fool age checks, is the Online Safety Act working?


A report based on a survey by the UK’s Internet Matters shows that much of the responsibility for managing the online safety of children still falls on families.

The Online Safety Act came into effect in July 2025, and the report explores what has changed in the online lives of UK families since then.

We discussed in December 2025 whether the privacy risks of age verification outweighed the enhanced child protection. While the report shows some progress, it mostly provides “an early view of how the online landscape is changing, and crucially, where it is not.”

Around half of children say they now see more age-appropriate content, and roughly four in ten parents and children feel the online world has become somewhat safer.

The online world is as much a part of a child’s environment as the physical world, and blocking access to parts of it is not a decision taken lightly. Almost half of children think age checks are easy to bypass. About a third admit to doing so recently, using tactics from fake birthdates and borrowed logins to spoofed faces and, less commonly, VPNs.

“I did catch my son [12] using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.”

Yet 90% of children who noticed improved blocking and reporting saw this as a good thing. Their support for these safety features is pragmatic. They point to:

  • clearer rules
  • restricted contact with strangers
  • limits on high-risk functions

They also rate these features as helpful in reducing exposure to harmful content and interactions.

But the system is not perfect. In the month after the child protection codes came into force, almost half of children reported some online harm, including violent, hateful, and body image-related content that should be covered by the Act’s protections.

The survey also revealed that age checks are now commonplace. Over half of children said they were asked to verify their age within a recent two-month window, often on major platforms like TikTok, YouTube/Google, and Roblox, on both new and existing accounts.

The technology is improving. Platforms use facial age estimation, government ID, and third-party age assurance apps, and these are usually easy for children to complete.

However, gains in protection come with unresolved and, in some cases, growing concerns around privacy and data use, especially around age verification and AI.

Parents are worried not just about what data is collected for age checks, but whether it will be stored or reused by government or industry. This has fueled calls for central, privacy-protective solutions rather than fragmented data collection across platforms.

Because age assurance systems are both intrusive (in terms of data) and often ineffective (easy workarounds, weak enforcement), the report suggests they may not yet provide a good safety-to-privacy trade-off from a family perspective.

Naturally, the survey also didn’t capture input from adults pretending to be children to gain access to child-only spaces, a risk that parents link directly to predatory behavior.

The authors conclude that the Online Safety Act has started to reshape children’s online environments, making safety features more visible and enabling more age‑appropriate experiences in some areas.

However, the Act has not yet produced a “step change.” Harmful content remains widespread, age‑assurance is patchy and easy to circumvent, and key concerns such as time spent online, AI risks, and persuasive design remain under‑regulated.




