After rejecting industry associations’ draft codes for filtering harmful content, the eSafety Commissioner has said platforms need broader commitments to detect child abuse material.
The content moderation watchdog has “a strong expectation that industry commit, through the codes, [to] a strong stance in relation to detection of that kind of material proactively,” eSafety acting chief operating officer Toby Dagg told senate estimates this week.
On November 18 last year, eSafety received draft industry codes from associations including the Digital Industry Group, which represents platforms such as Meta, Twitter and Google.
Then in December, eSafety released a damning report [pdf] on platforms’ technical limitations detecting and responding to child abuse content.
“Some of the largest cloud-hosted content, like iCloud and OneDrive, were not scanning for child sexual abuse imagery,” eSafety Commissioner Julie Inman Grant told the committee.
“And so it really suggests to us when you think about all of the devices and handsets that are out there, and all the potential storage, that we don’t even know the scale and the scope of child sexual abuse [material] that’s existing on these mainstream services.
“The major companies that do have access to advanced technology — AI, video matching technologies, imaging clusters and other technologies — should be putting investment into these tools to make them more efficacious,” she said.
eSafety executive manager of legal research, marketing and communications Morag Bond added: “We made it clear to industry that we wanted to see that commitment to deploy technology to identify those images that have already been vetted as child sexual abuse material broader.”
On Monday, the Commissioner asked the associations to resubmit their draft codes for filtering class 1A and 1B “harmful content,” and to address “areas of concern.”
The full text of the draft codes was not published.
Class 1 is content that would be refused classification under the National Classification Scheme, such as child abuse and terror material.
The Commissioner intends to register the industry codes in March, and said that if the resubmitted codes did not include “improved protections” she could define codes independently.
“I have given specific feedback to each of the industry associations dealing with each code about where I think some of the limitations or the lack of appropriate community safeguards exist,” Inman Grant told senators.
Under-investment in detection technology
Inman Grant said her office’s investigation into technologies used by seven platforms to detect child abuse material uncovered “some pretty startling findings.”
“By no means were any of these major companies doing enough,” she said.
“Some were doing shockingly little.”
“We issued seven legal transparency notices to Microsoft, Skype, Apple, Meta, WhatsApp, Snap and Omegle,” the Commissioner said in response to a request for an update on Big Tech’s initiatives to stop live streaming of child abuse material.
“There was quite a bit of variation across the industry… the time to respond to child sexual abuse reports varied from four minutes for Snap to up to 19 days by Microsoft when Skype or Teams required review,” Inman Grant said.
eSafety’s ‘Basic Online Safety Expectations: summary of industry responses to the first mandatory transparency notices’ report outlined the technologies available for online service providers to detect different forms of child abuse material and broke down which platforms were and were not deploying them.
The report evaluated the extent to which the platforms were detecting previously confirmed child abuse images and videos, new material containing child abuse images and videos, online grooming and the platforms’ responses to user reports.
The report stated that technology for identifying confirmed images, such as PhotoDNA, is accurate and widely accessible.
“A ‘hash matching’ tool creates a unique digital signature of an image which is then compared against signatures of other photos to find copies of the same image. PhotoDNA’s error rate is reported to be one in 50 billion,” the report stated.
Services using hash matching technology for confirmed images included: OneDrive (for shared content), Xbox Live, Teams (when not end-to-end encrypted, or E2EE), Skype messaging (when not E2EE), Snapchat’s Discover, Spotlight and direct chat features, Apple’s iCloud email, and Meta’s news feed content and Messenger services (when not E2EE).
WhatsApp uses E2EE by default but PhotoDNA is applied to images in user profiles and user reports.
Services not using hash matching technology for images included: OneDrive (for stored content that is not shared), Snapchat’s snaps, and Apple’s iMessage (E2EE by default).
The breakdown of services detecting confirmed videos with hash matching technology was largely the same, except that the technology had not been used for iCloud email.
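In general terms, hash matching comes down to computing a compact signature for each uploaded image and checking it against a list of signatures for already-verified material. The sketch below illustrates that pattern using the open-source imagehash library; PhotoDNA itself is proprietary and only available to vetted organisations, and the signature value and distance threshold shown here are placeholder assumptions, not real data.

```python
# Illustrative sketch only: PhotoDNA is proprietary, so this uses the
# open-source `imagehash` library to show the general pattern the report
# describes: compute a compact signature per image and compare it against
# signatures of previously verified material.
from PIL import Image
import imagehash

# Placeholder signatures; in practice these would come from a clearing house
# of confirmed material rather than being hard-coded.
KNOWN_SIGNATURES = {imagehash.hex_to_hash("d1c4e6a2b0f19387")}

MAX_DISTANCE = 5  # tolerance for minor edits such as resizing or re-encoding


def matches_known_material(path: str) -> bool:
    """Return True if an image's perceptual hash is close to a known signature."""
    signature = imagehash.phash(Image.open(path))
    return any(signature - known <= MAX_DISTANCE for known in KNOWN_SIGNATURES)
```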
Detecting new, unconfirmed images, video and live streams of child sex abuse is much more challenging, but the technology is available, the report said.
“This may occur through the use of artificial intelligence (‘classifiers’) to identify material that is likely to depict the abuse of a child, and typically to prioritise these cases for human review and verification.
“These tools are trained on various datasets, including verified child sexual exploitation material… An example of this technology is Google’s Content Safety API or Thorn’s classifier, which Thorn reports has a 99 percent precision rate.”
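The workflow the report describes, in which a classifier scores content and likely matches are routed to trained human reviewers rather than actioned automatically, can be outlined in a few lines. The sketch below is generic: Google’s Content Safety API and Thorn’s classifier are access-restricted services, so the classifier interface and the 0.9 threshold are hypothetical placeholders, not either vendor’s actual API.

```python
# Generic sketch of the "classifier plus human review" workflow the report
# describes. The classifier interface and the operating point are hypothetical
# placeholders, not Google's or Thorn's actual APIs.
import heapq

REVIEW_THRESHOLD = 0.9                        # assumed operating point
review_queue: list[tuple[float, str]] = []    # min-heap keyed on negated score


def triage(classifier, upload_id: str, image_bytes: bytes) -> None:
    """Score an upload and queue likely matches for human verification."""
    score = classifier.predict(image_bytes)   # hypothetical model call
    if score >= REVIEW_THRESHOLD:
        # Negate the score so the highest-risk items are reviewed first.
        heapq.heappush(review_queue, (-score, upload_id))


def next_case_for_review() -> str | None:
    """Return the highest-priority upload awaiting human review, if any."""
    return heapq.heappop(review_queue)[1] if review_queue else None
```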
The only services using technology to detect new images were Meta’s Facebook, Instagram Messenger, Instagram Direct (when not E2EE), and WhatsApp.
eSafety’s report said none of the services it reviewed had deployed technology to detect live streaming of child sex abuse material except Omegle, which used Hive AI.
Safety tech company SafeToNet’s ‘SafeToWatch’ tool was given as an example of a solution that could be implemented to stop live streaming of the material.
It provides “a real-time video threat detection tool… to automatically detect and block the filming and viewing of child sexual abuse material,” the report said.