Ofcom publishes draft online child safety rules for tech firms


Social media firms must implement effective age-checks and “tame toxic algorithms” that recommend harmful content to children, according to Ofcom’s new safety codes of practice.

Published by Ofcom on 8 April 2024, the children’s safety codes contain more than 40 measures designed to help social media firms comply with new legal obligations placed on them by the Online Safety Act (OSA) to protect under-18s using their services.

Passed in October 2023, the OSA obliges digital platforms to prevent children from accessing harmful or age-inappropriate content (such as pornography or posts promoting self-harm); to enforce age limits and implement age-checking measures; and to conduct risk assessments of the dangers their services pose to children.

The act also obliges platforms to remove illegal content quickly or prevent it from appearing in the first place.

Under the codes, Ofcom expects any internet service that children can access (including social media networks and search engines) to carry out robust age-checks; to configure its algorithms to filter the most harmful content out of children’s feeds; and to implement content moderation processes that ensure swift action is taken against such content.

“In line with new online safety laws, our proposed codes firmly place the responsibility for keeping children safer on tech firms. They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age,” said Ofcom chief executive Melanie Dawes.

“Our measures – which go way beyond current industry standards – will deliver a step-change in online safety for children in the UK. Once they are in force, we won’t hesitate to use our full range of enforcement powers to hold platforms to account. That’s a promise we make to children and parents today.”

The draft codes also include measures to ensure tech firms’ compliance, including having a named senior person accountable for compliance with the children’s safety duties; an annual senior-body review of all risk management activities relating to children’s safety; and an employee code of conduct that sets standards for staff around protecting children.

Ofcom said it will launch an additional consultation later this year on how automated tools, including artificial intelligence (AI), can be used to proactively detect illegal content and content most harmful to children, including previously undetected child sexual abuse material (CSAM) and content encouraging suicide and self-harm.

There is now an open consultation on the draft codes, with Ofcom planning to finalise them within a year and for them to come into force three months after that.

“The government assigned Ofcom to deliver the act and today the regulator has been clear: platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online,” said digital secretary Michelle Donelan. “Once in place, these measures will bring in a fundamental change in how children in the UK experience the online world.”

Tamsin Allen, a media partner at law firm Bindmans, said that Ofcom’s proposals are “an encouraging first step, but the proof of their commitment will be in their willingness to impose sanctions on big tech for compliance failures. The act gives them new powers to fine companies up to 10% of global turnover, huge sums which could finally penetrate big tech’s armour.”

Ongoing concerns

However, the OSA has caused concern among both tech companies and civil society organisations over certain provisions that could undermine encrypted communications, privacy and freedom of expression.

Privacy campaigners at Open Rights Group (ORG), for example, have said the implementation of age assurance systems – including photo-ID matching, facial age estimation, and reusable digital identity services – to restrict children’s access could inadvertently curtail individuals’ freedom of expression while simultaneously exposing them to heightened cyber security risks.

“Adults will be faced with a choice: either limit their freedom of expression by not accessing content, or expose themselves to increased security risks that will arise from data breaches and phishing sites,” said executive director Jim Killock.

“Some overseas providers may block access to their platforms from the UK rather than comply with these stringent measures. We are also concerned that educational and help material – especially where it relates to sexuality, gender identity, drugs and other sensitive topics – may be denied to young people by moderation systems.

“Risks to children will continue with these measures. Regulators need to shift their approach to one that empowers children to understand the risks they may face, especially where young people may look for content, whether it is meant to be available to them or not.”

Robin Tombs, CEO at biometrics firm Yoti, argued that while there is “no one silver bullet when it comes to child safety”, effective age-checking technology will be an essential part of protecting children from accessing harmful content online.

“It is important that people are offered a choice in how they prove their age, to ensure age assurance is inclusive and accessible to all. Thankfully, Ofcom has recognised that facial age estimation, reusable digital identity services and photo-ID matching are all highly effective solutions,” he said.

On the question of encryption, providers of encrypted messaging and email services such as WhatsApp, Signal and Element have previously threatened to pull out of the UK if Ofcom requires them to install “accredited technology” to monitor encrypted communications for illegal content.

This is on the basis that section 122 of the act gives Ofcom powers to require technology companies to install systems that, the companies argue, would undermine the security and privacy of encrypted services by scanning the content of every message and email to check whether it contains CSAM.

While nothing in the draft codes mentions encrypted services, Ofcom said it will publish specific proposals later in 2024 on its use of “tech notices” to require services, in certain circumstances, to use accredited technologies to deal with specific types of illegal content.

“We expect this to cover guidance on how we would use these powers, our advice to government on the minimum standards of accuracy, and our approach to accrediting technologies,” it said.


