The rollout of the UK’s Online Safety Act in July 2025 was intended to create a safer digital environment for children through stricter age verification rules, tighter moderation standards, and stronger protections against harmful online content. However, early evidence suggests that many of the safeguards introduced under the legislation can still be bypassed with surprisingly simple tactics, including a fake moustache drawn with makeup.
Recent findings have raised concerns among parents, researchers, and digital safety experts about the effectiveness of current age verification systems. While the Online Safety Act has led to some improvements in children’s online experiences, critics argue that enforcement remains inconsistent and that many platforms are still vulnerable to manipulation.
One of the most widely discussed examples involved a 12-year-old boy who reportedly used an eyebrow pencil to draw a fake moustache before facing a facial age estimation check. According to the report, the altered appearance convinced the system that he was 15 years old, allowing him to bypass restrictions designed for younger users. The incident has become a symbol of broader concerns about the reliability of AI-driven age verification technologies.
Online Safety Act Faces Early Challenges
The Online Safety Act was introduced to strengthen online child protection measures by requiring platforms to implement stricter checks and reduce children’s exposure to harmful material. The legislation also aimed to improve reporting tools and create safer digital spaces for younger users.
Despite those goals, the report suggests that loopholes remain widespread. Children have reportedly been bypassing protections using several methods, including entering false birthdates, borrowing adult credentials, sharing accounts, and using VPN services. More advanced attempts have also involved spoofing the facial recognition systems used in age verification checks.
Survey data cited in the findings revealed that nearly half of children believe current age verification systems are easy to evade. Around one-third admitted to bypassing these systems in recent months.

The fake moustache example in particular highlighted weaknesses in facial age estimation tools that rely heavily on visual indicators rather than stronger forms of identity confirmation. Experts argue that systems based primarily on appearance can be vulnerable to minor cosmetic changes, lighting adjustments, or camera manipulation.
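One reason a point estimate like "15 years old" can wave through a 12-year-old is that some systems compare the raw estimate against a cutoff without accounting for the estimator's error margin. As a minimal, hypothetical sketch (the function, margins, and thresholds here are illustrative and do not describe any vendor's actual implementation), a more cautious decision rule would escalate borderline estimates to a stronger check rather than allow or deny outright:

```python
def decide_access(estimated_age: float, margin_of_error: float, required_age: float) -> str:
    """Hypothetical decision rule for a facial age estimation check.

    Rather than trusting the point estimate alone, treat the estimate as a
    range [estimated_age - margin, estimated_age + margin]:
    - allow only when the whole range clears the cutoff,
    - deny when the whole range falls below it,
    - otherwise escalate to a stronger check (e.g. document verification).
    """
    if estimated_age - margin_of_error >= required_age:
        return "allow"
    if estimated_age + margin_of_error < required_age:
        return "deny"
    return "escalate"


# A borderline estimate (15 +/- 4 against a cutoff of 13) is escalated,
# not allowed, which is the kind of safeguard a cosmetic disguise that
# merely nudges the estimate upward would still have to get past.
print(decide_access(15, 4, 13))   # escalate
print(decide_access(25, 4, 18))   # allow
print(decide_access(10, 3, 18))   # deny
```

The point of the sketch is not the specific numbers but the structure: appearance-based estimates carry uncertainty, and a system that ignores that uncertainty is exactly the kind that small cosmetic changes can tip over a threshold.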
Mixed Results Following Online Safety Act Rollout
Although concerns over age verification remain significant, the report noted that the Online Safety Act has produced some positive outcomes. Approximately half of the surveyed children said they were now seeing more age-appropriate content online. In addition, around 40% of both children and parents stated that the internet feels somewhat safer since the legislation came into effect.
Many children also appeared supportive of increased online protections. The findings showed that younger users generally approved of stricter platform rules, reduced interaction with strangers, and limitations placed on high-risk platform features.
Around 90% of children who noticed stronger moderation systems and improved reporting tools viewed those changes positively. Researchers said this indicates that many younger users are willing to engage with safer digital environments when protections are implemented effectively.
Still, the improvements have not been universal. Within just one month of new child protection codes being introduced under the Online Safety Act, nearly half of the children surveyed reported encountering harmful content online. This included violent material, hate speech, and body image-related content, all categories the legislation specifically aims to regulate.
Privacy Concerns Grow Around Age Verification
The expansion of age verification requirements has also triggered growing concerns over privacy and data security. More than half of the children surveyed said they had been asked to verify their age within a recent two-month period. These checks were reportedly common across major platforms, including TikTok, YouTube, Google services, and Roblox.
Many platforms now rely on technologies such as facial age estimation, government-issued identification checks, and third-party age assurance providers to comply with the Online Safety Act. While users generally described the systems as easy to complete, concerns remain about how sensitive data is collected, stored, and potentially reused.
Parents expressed unease about whether biometric information and identity documents submitted during age verification could later be retained by companies or accessed by government agencies. Those concerns have intensified calls for more centralized and privacy-focused verification systems instead of fragmented checks spread across multiple online services.
Experts argue that current approaches may not strike the right balance between child safety and personal privacy. They warn that if the weaknesses exposed by tactics like the fake moustache incident are not addressed, public trust in these systems could continue to decline.