One Week of the Online Safety Act: Cyber Experts Weigh In
The conversation around the UK’s Online Safety Act has transformed over the past week. Since its age verification requirements came into force last Friday (25th July 2025), there has been considerable public outcry, including a petition, signed by over 400,000 people, calling for the Act to be scrapped altogether. The UK government has since rejected the idea, with no sign of backing down. In parallel, consumers have scrambled to find workarounds: VPN usage spiked in the UK, with sign-ups to one service surging by more than 1,400%. Many are also questioning the security of the organisations and third parties required to store such sensitive data. Surprisingly, sites not typically seen as ‘adult’, like Spotify, are also asking users to upload their ID, leaving many to ask: where does it end?
This is a story with many moving parts, and things have snowballed over the past week. One could focus on VPNs, the software supply chain security of third-party ID verification services, or the child-safety rationale behind the Act’s conception, and still not scratch the surface. Instead, The Gurus asked cybersecurity experts from across the industry to weigh in…
Brian Higgins, Security Specialist at Comparitech, on VPNs:
“One of the more alarming emerging trends is the almost immediate mission creep of this legislation. The VPN issue was always going to deflate the effectiveness of any age verification measures, in fact it’s rather worrying that those responsible seem quite so surprised by this development. But due to the wide-ranging wording of the content potentially covered by the Bill, legislative compliance is impacting platforms and users in far more draconian fashion than may be deemed reasonable. Spotify is one service which has dismayed users by requiring AV and a prominent UK actor recently found he could no longer access pictures of his own children when posted on Social Media by their mother.
Many more examples of the swingeing reach of this Bill will undoubtedly continue to arise, so it’s no wonder people will look for workarounds. Are Ofcom going to arrest everyone who uses a fake AI-generated driver’s licence to spoof their way onto Facebook, or will they be too busy getting sued by the U.S. State Department? Only time will tell.”
Graeme Stewart, Head of Public Sector at Check Point, on a potential VPN ban:
“The idea of banning VPNs puts the UK in the company of China, Russia, and Iran. That should tell you everything. The Government’s attempt to regulate online harm has backfired spectacularly. In trying to stop children seeing harmful content, they’ve driven tens – maybe hundreds – of thousands of people to adopt tools that make lawful interception near-impossible.
Worse still, they’ve outsourced enforcement to unaccountable third parties, relying on fragmented databases that offer no guarantee of security, legitimacy, or transparency. Evidence is already emerging of fake Google and ChatGPT-generated IDs being accepted. This isn’t enforcement – it’s become a bit of theatre.
Just look at the Tea App debacle – a live example of what happens when poor verification meets bad actors.
From a cybersecurity perspective, this is last-century thinking. And here’s the kicker: by using a VPN to protect yourself, you now risk being flagged as a person of interest.
You can’t claim to protect privacy while handing people’s most sensitive data to unregulated vendors.
People are turning to VPNs because they don’t trust the system – and who can blame them? These are the same tools protecting journalists, whistleblowers, and citizens from surveillance and abuse. Banning VPNs doesn’t fix the problem – it just punishes the public for not blindly trusting a system that keeps failing them.”
Lucy Finlay, Director, Secure Behaviour and Analytics at Redflags, on uploading IDs:
“The requirements for certain websites to verify age by uploading a live selfie or a copy of an ID opens a whole new avenue of attack for cyber criminals and privacy questions for policy makers. Firstly, it invites setting up malicious prompts for ID verification on compromised websites, funnelling sensitive data away from unsuspecting users, who are being conditioned not to question giving away their ID. This is an example of “sludge”, where a nudge is being used as a friction or barrier to accessing what you want, so people are instinctively acquiescing to this request rather than question its legitimacy. Except it’s now not just pressing “accept all” on annoying cookie pop-ups… it’s giving away your ID or facial data. Secondly, it creates data regulation and privacy headaches, as foreign companies are engaged to carry out the verification service for the websites. Lastly, these companies are likely to be subject to increased scrutiny from bad actors wishing to get their hands on a goldmine of IDs and kompromat-worthy material associated with the “sensitive” material they are viewing. Do these risks outweigh the benefits gained, given these verification checks can currently be bypassed by a simple VPN?”
Mayur Upadhyaya, CEO at APIContext, on going cold turkey:
“It’s incredibly difficult to put the genie back in the bottle. These platforms have been accessible for so long that viewing them has become a deeply embedded habit for many young people. Going cold turkey overnight won’t work, especially if the only alternative is technical enforcement. We’re already seeing a surge in free VPN use, which carries serious risks like malware, trackers, and compromised data. More concerning is the cultural divide this creates. When kids feel they have to hide their online behavior, it shuts down the open dialogue parents need to have. The intent behind the Online Safety Act is well meaning, but real change requires education, safer alternatives, and trust, not just technical restrictions.”
Chris Hauk, Consumer Privacy Advocate at Pixel Privacy, on the risks of organisations that store IDs being targeted by hackers:
“While I applaud any action taken to protect minors while they are online, providing your personal data, including your government ID, to websites, particularly adult websites, is a bridge too far. Many adult websites are run by unsavoury individuals and groups, and turning over an image of an ID card could allow these criminal types to carry out criminal activities using that information.
While VPNs are an excellent way to avoid these ID requirements by connecting through another city or country where ID is not yet required, there are rumblings that governments will soon consider banning the use of VPNs to do so. This is another step toward greater government control of the internet, and the ability to restrict what we can see online.
Even if a website that requires government ID to log in is on the up and up, the information could be exposed in a data breach, meaning a user’s online activities could be revealed to their friends, families and employers. This happened years ago in the 2015 Ashley Madison data breach, when more than 60GB of data on customers of the extramarital “dating site” was released.”
Anne Cutler, Cybersecurity Expert at Keeper Security, on a better way to protect children:
“The Online Safety Act introduces complex safety obligations for digital platforms, including age verification, content moderation and data collection requirements aimed at protecting children. But in fulfilling these obligations, platforms are being asked to collect and store highly sensitive personal data, raising urgent questions around how securely this information is being managed – and whether the infrastructure behind these platforms is up to the task.
Content moderation, like that spelled out in the Online Safety Act, needs a security-first strategy to underpin these safety measures. This strategy should be laser-focused on preventing unauthorised access, and safeguarding against internal threats, third-party vendors and cybercriminals. As platforms move to meet their regulatory responsibilities and begin collecting the necessary data, it’s critical to identify and address the security infrastructure that supports them. Security must be integrated from the ground up – through robust access controls, privileged user management, encryption and breach detection.
Building long-term digital resilience also means investing in both safety and security education – not just for children, but for the adults who build, manage and secure these systems. Many children – and the adults around them – simply aren’t aware of how vulnerable their accounts and data are, or how to effectively protect them. Keeper’s Flex Your Cyber initiative, in collaboration with reputable cybersecurity partners (National Cybersecurity Alliance, KnowBe4 and CYBER.org) was created to close the knowledge gap in cybersecurity awareness, while also pushing for enterprise-grade security standards in the classroom and beyond. But education alone cannot carry the weight of regulatory compliance. Platform providers must prioritise security-by-design principles from day one, embedding access controls and monitoring systems that ensure user protection is always active, not just passive.
Such an approach is especially critical in a world where threats targeting children are becoming harder to detect. Children are engaging not just with difficult content, but with increasingly complex, AI-driven digital experiences. These interactions can expose them to new forms of harm – from hacked accounts and impersonation to emotionally manipulative chatbots. Without proper access controls, data encryption and breach monitoring, child-facing platforms – and the data they contain – remain soft targets for malicious actors.”
Note: This is a developing story.