Eight nations are syncing their content restriction, user surveillance, corporate disclosure and other oversight powers aimed at mitigating online harms.
Australian, UK, French, South Korean, South African, Fijian, Irish and Slovakian internet regulators want to make their risk assessments, investigations, research and enforcement actions more streamlined and collaborative.
“By mapping the similarities and differences in our regulatory remits, the network [of regulators] has identified opportunities in multiple areas to pursue coherence between our respective regimes,” the Global Online Safety Regulators Network said on Friday.
The signatories’ joint statement [pdf] focused on the technologies and legal instruments the regulators plan to standardise and help each other deploy, with few details on the content types or online behaviours considered “cross-border harm”.
The Australian eSafety commissioner Julie Inman Grant said that “as regulators, we face similar challenges: we’re national entities mandated to regulate a complex set of global harms involving companies principally domiciled offshore.”
Syncing content moderation
The coalition of regulators compared their similar powers to restrict what users see, compel platforms to disclose their internal safety processes, and proactively scan user-generated content.
Coordinating their overlapping powers could overcome jurisdictional limitations to mitigate harmful content, the regulators said.
“Where there are instances of systemic non-compliance across jurisdictions, the network might consider working more closely on investigations and enforcement action.”
Besides France and the UK, all the regulators can “issue content removal and blocking notices”.
A “blocking notice”, or geofence, restricts material from being shown in a specific region; X successfully argued that geofencing was consistent with eSafety’s order to restrict footage of the Wakeley church stabbing from its Australian users.
A “content removal” order applies across an entire platform worldwide, which eSafety argued was necessary because content hosted on X’s US servers could still be viewed through virtual private networks (VPNs).
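The distinction between the two order types comes down to where the visibility check is applied. A minimal sketch, with invented names and data (no real platform implements it this way), of how a geofenced blocking notice differs from a platform-wide removal order, and why a VPN defeats only the former:

```python
# Hypothetical sketch: a geofence hides content only from viewers whose
# requests appear to come from a listed region, while a removal order
# hides it from everyone. All identifiers here are invented.

BLOCKED_IN = {"post/123": {"AU"}}   # blocking notice: geofenced regions
REMOVED = {"post/456"}              # content removal order: global

def is_visible(post_id: str, viewer_country: str) -> bool:
    """Return whether a post can be served to a viewer.

    viewer_country is inferred from the request's IP address, which is
    why a VPN exit node in another country bypasses a geofence but not
    a platform-wide removal.
    """
    if post_id in REMOVED:
        return False
    return viewer_country not in BLOCKED_IN.get(post_id, set())

# Geofenced post: hidden from Australian IPs, visible via a US VPN exit.
assert is_visible("post/123", "AU") is False
assert is_visible("post/123", "US") is True
# Removed post: hidden regardless of where the viewer connects from.
assert is_visible("post/456", "US") is False
```

The sketch illustrates eSafety’s argument: because the geofence keys off the viewer’s apparent location rather than the content itself, only the platform-wide removal closes the VPN loophole.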
Google, Meta, TikTok and other foreign tech giants comply, unchallenged, with several thousand takedown orders globally every year.
Coordinating geofences won’t resolve the VPN loophole
If all eight nations geofenced content each time one regulator issued a takedown notice, the effect would only be to reduce the content’s discoverability, something the blocking architecture within a single jurisdiction can already achieve; VPN users could still route around every geofence.
The absence of US-based authorities like the Federal Communications Commission (FCC) means the network cannot overcome the jurisdictional barriers to compelling companies like X to comply with the disclosure or content removal notices its members issue.
And even if the FCC were to join, it could not synchronise regulation with the network’s other members without overturning Section 230 of the US Communications Decency Act and other laws limiting platforms’ liability for third-party content.
Beyond the difficulty of coordinating bans on content that US institutions would be more likely to consider free speech, US membership would also add little to the collective strength of enforcement actions against content that is less controversial to regulate.
The US passed legislation earlier this month [pdf] bringing its reporting obligations, and the penalties for failing to meet them, into line with the standards of the other nations in the network, further reducing the incentive to encourage US membership.
Leveraging platforms with service restrictions?
Besides Fiji and South Africa, all the regulators can issue “service blocking or restriction orders”.
Last year, when Turkey threatened to shut down X unless it removed anti-Erdoğan tweets and accounts, Musk complied.
Former Twitter CEO Jack Dorsey said that he similarly caved on blocking accounts criticising the Indian government when faced with a shutdown threat, claims Prime Minister Narendra Modi’s government denies.
Although threatening X with the users it stands to lose from an ISP-level block has proven more effective than financial penalties in the past, and a coordinated shutdown across all network members would carry real weight, eSafety would still have to make a very strong case to the Federal Court.
Defined and elusive harms
Inman Grant said that the approach to “global collaboration” was aimed at “promoting a degree of alignment in objectives and outcomes” rather than “identical legal and regulatory frameworks.”
However, the approach is also consistent with regulators’ domestic strategies.
The regulators have two goals, corresponding to two categories of content they regulate.
Firstly, the regulators are seeking to set precedents and build shared capabilities to remove the loosely defined harmful material that their remits empower them to deem illegal.
Since the regulators gained stronger powers between 2022 and 2023, only X and smaller sites have continued to keep such content online.
The regulators precisely define only the most egregious harms, which no platform makes freedom-of-speech arguments to keep online: “child sexual exploitation and abuse material,” the subject of the only other position statement [pdf] released since the network launched in 2022.
Platforms agree to remove such content but disagree with regulators about how.
The UK’s plans to automatically detect, block and report the content, through scanning either on devices or on intermediary government-owned servers, have been met with threats from platforms to cease operating.
Apple’s submission to eSafety on the proposal said it “opens the door for bulk surveillance”.
“Such capabilities, history shows, will inevitably expand to other content types (such as images, videos, text, or audio) and content categories.”
Meta has refused to share automated detections made by its device-based AI but will blur the images as a concession.