Melbourne-headquartered email hosting provider Fastmail has said that the eSafety Commissioner’s proposed industry standards, requiring message and online file-storage providers to scan content, pose risks to users’ privacy.
The two drafted standards for detecting, disrupting and removing illegal material cover a broad range of industries, including most messaging and email services [pdf], cloud storage services and most apps and websites [pdf].
eSafety registered industry-drafted, enforceable codes for other sectors like search engines, but said it would draft its own standards for the remaining sectors after rejecting industry codes that exempted end-to-end encryption (E2EE) service providers from obligations like detecting illegal content.
Fastmail CEO Bron Gondwana told iTnews that content scanning was “well-meaning” but not a “technically good” solution.
“It has been our experience that government bodies often mandate well-meaning things that are not technically good solutions, as anybody who has clicked through hundreds of cookie consent popups in their life will be painfully aware. This is a concern with any legislation.”
eSafety has said that automated content-scanning technology like hash-matching can respect users’ privacy by only flagging illegal material.
“The technology doesn’t scan text in emails or messages, or analyse language, syntax, or meaning,” commissioner Julie Inman Grant said when releasing the drafts in November.
However, Gondwana said that any form of content scanning carries the risk of function creep.
“The privacy-preserving component, in particular, is very hard to guarantee if dishonest actors can put fingerprints into the ‘bad content’ database to track content which is legal but government actors want to know who is transferring it.”
Inman Grant also said that another privacy guarantee was that the standards only require platforms to match users’ content against material that authorities have already verified as illegal, reducing the likelihood of legal material being erroneously flagged.
Hash-matching tools that detect verified, illegal material, such as Microsoft’s PhotoDNA, have lower false positive rates than tools that use machine learning to predict whether a user’s message, upload or post contains unverified, illegal content such as child sexual exploitation material (CSEM).
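The distinction Inman Grant draws, matching fingerprints rather than analysing content, can be sketched as follows. This is an illustrative example only, not PhotoDNA: real tools use perceptual hashes that survive resizing and re-encoding, while an exact SHA-256 stands in here, and the fingerprint database is hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of verified illegal material.
# Real tools such as PhotoDNA use proprietary perceptual hashes;
# an exact cryptographic hash stands in for illustration.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-verified-sample").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Return a fixed-length fingerprint of the raw bytes."""
    return hashlib.sha256(content).hexdigest()

def is_flagged(content: bytes) -> bool:
    """Flag content only if its fingerprint is in the database.
    The content is never parsed: no text, language, syntax or
    meaning is analysed; only the hash is compared."""
    return fingerprint(content) in KNOWN_BAD_HASHES
```

Because only exact (or near-exact, with perceptual hashing) matches against pre-verified material are flagged, false positives are rarer than with machine-learning classifiers that try to predict whether previously unseen content is illegal.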
The standards would still allow – without requiring – platforms to detect unverified, illegal material; Meta already uses classifiers trained on verified, illegal material to detect unverified, illegal material.
Intermediary server-based and client-side scanning?
The eSafety commissioner would exempt individual E2EE service providers from the standards if they can prove it is not “reasonable… to incur the costs of taking action, having regard to the level of risk to the online safety of end-users.”
E2EE providers would not have “to design systematic vulnerabilities or weaknesses into any of their end-to-end encrypted services,” eSafety has said, because it is “technically feasible” to scan content before it is encrypted.
Platforms could deploy “proactive, device-based detection tools operating at the point of transmission, rather than during transmission,” the commissioner’s ‘Updated Position Statement’ on E2EE [pdf] released in October suggested.
Alternatively, a government agency could manage “proactive detection tools that operate within dedicated ‘secure enclaves,’” the statement added.
“This proposal would involve an E2EE communication being sent from a user’s device and then checked for known CSEM.
“Such a system can be audited to ensure it fulfills only one function, and that no access is provided to the service provider or other third parties.”
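The “before it is encrypted” approach eSafety describes could look roughly like this on the sender’s device: the fingerprint check runs first, and only non-matching content proceeds to encryption. A minimal sketch under stated assumptions; the hash list, the check-then-encrypt flow, and the toy cipher are all illustrative, not a real E2EE implementation.

```python
import hashlib

# Hypothetical fingerprint database of known material (illustrative only).
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-sample").hexdigest()}

def toy_encrypt(data: bytes) -> bytes:
    # Stand-in for a real E2EE encryption step (NOT real cryptography).
    return bytes(b ^ 0x5A for b in data)

def check_then_encrypt(plaintext: bytes):
    """Client-side scanning at the point of transmission: the
    fingerprint check happens on-device, before encryption, so
    the encryption protocol itself is left untouched."""
    if hashlib.sha256(plaintext).hexdigest() in KNOWN_BAD_HASHES:
        return None  # match against known material: block or report
    return toy_encrypt(plaintext)  # otherwise, encrypt and send as normal
```

This is also where Gondwana’s objections bite: whoever controls the contents of the hash list controls what gets flagged, regardless of where the check runs.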
Fastmail was not E2EE by default, Gondwana said, but the email hosting provider supports customers having the option to “implement their own end-to-end solutions on top of us.”
Gondwana said scanning content on a user’s device or an intermediary server still carried the same privacy risks.
“Both client-side scanning and ‘secure enclave’ scanning are technically sound approaches to content analysis, and the risks of them are the same that they always have been for these problems: ‘who controls the list of bad content’ and ‘false positives / false negatives’.”
The eSafety Commissioner’s proposals also drew the ire of E2EE service providers Mozilla, Signal, Proton and the Tor Network.
The companies signed an open letter against the standards coordinated by online civil rights advocacy groups Digital Rights Watch, Access Now, and the Global Encryption Coalition Steering Committee.
Digital Rights Watch head of policy Samantha Floreani told iTnews, “The eSafety commissioner has publicly stated that they do not expect services to undermine or weaken encryption, however, that isn’t reflected in the body of the standards.”
“We are calling for that intention to be clearly stated in the legal instrument to better protect the privacy, security and ultimately the safety of all internet users,” Floreani said.
“Client-side scanning enables monitoring of material that might otherwise never leave a user’s device, and in doing so pushes the reach of surveillance across the boundary between what is shared and what is private.
“Because this would happen at a population level, it creates dangerous capability for mass monitoring and surveillance.”