Australia’s online safety regulator has given Twitter, Google, TikTok, Twitch and Discord 35 days to outline how they detect child abuse material and stop their algorithms from amplifying it.
The platforms were served with legal notices yesterday “requiring them to answer tough questions.”
eSafety Commissioner Julie Inman Grant said the questions she asked vary across the relevant industry sectors and the specific tech giants within them.
Grant wants to know what hash-matching tools, classifiers and other AI the social media and messaging providers use to detect the harmful content.
She has also sought to overturn search engines’ long-running tradition of withholding their “approach to indexing web pages,” and asked Twitter how it can enforce tougher compliance measures when it has culled its Australian workforce.
The companies face fines of $687,500 per day if they do not “comply with these notices from the eSafety Commissioner” by March 30, according to communications minister Michelle Rowland.
The quick turnaround comes after the watchdog said it intends to register industry codes in March for censoring content depicting child abuse, terrorist material and extreme violence, and that it does not need industry associations to commit to them first.
The warning accompanied her rejection of eight draft codes for censoring illegal content, which were proposed by associations like the Digital Industry Group Inc (DIGI) in November.
The codes will set out how the Basic Online Safety Expectations are adhered to and enforced.
Cracking open content algorithms
Grant said that the questions she issued the five providers included “the role their algorithms might play in amplifying seriously harmful content.”
This is a step up from the transparency notices she sent in August last year: back then, Apple, Meta (including its WhatsApp operation), Microsoft (including Skype), Snap, and Omegle were only asked about detection technologies and responses to harmful content reports.
Grant elaborated on her expectations around algorithmic transparency in a letter sent in early February to DIGI, the industry association representing companies like Google, TikTok and Twitter.
The letter told DIGI it was “unclear” how its proposed draft codes would “ensure ongoing investments to support algorithmic optimisation.”
It called for stronger commitments “to improve ranking algorithms following the review or testing envisaged, and/or expenditure in research and development in technology to reduce the accessibility or discoverability of class 1A [child abuse] material.”
“Providers must, at a minimum, make available information about its approach to indexing web pages and conduct regular performance testing of its algorithms,” the letter stated.
The public are in the dark about the extent of any commitments DIGI has made towards preventing members’ algorithms from amplifying child abuse material.
This is because, even as members of DIGI like Meta have called for the draft codes to be publicly released, eSafety has said the tech giants are more cooperative behind closed doors than in public debates.
Getting Twitter, Google, TikTok, Twitch and Discord to detail how their algorithms rank content may prove the most ambitious step in Grant’s regime change.
Ranking algorithms are the intellectual property that these same platforms have fought hardest to keep secret since the Australian Competition and Consumer Commission launched its digital platforms inquiry in 2019.
Moreover, given the level of detail Grant demanded from the last set of companies she issued transparency notices to, she is unlikely to accept surface-level responses about how Twitter, Google, TikTok, Twitch and Discord’s content-ranking algorithms operate.
Detecting abuse content
Grant said the questions would determine if Twitter, Google, TikTok, Twitch and Discord “use widely available technology, like PhotoDNA, to detect and remove this material.”
PhotoDNA is one of many hash-matching tools for identifying confirmed child abuse images. It creates a unique digital signature for an image, which is then compared against the signatures of other photos to find copies of the same image.
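PhotoDNA’s exact algorithm is proprietary and not public, but the general idea of hash matching can be sketched with a much simpler perceptual “average hash.” The snippet below is a minimal illustration only, not PhotoDNA; it assumes the Pillow imaging library is installed, and the file names and distance threshold are hypothetical examples.

```python
# Minimal sketch of perceptual hash matching (illustrative only, not PhotoDNA).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink an image to a tiny grayscale grid and encode each pixel as
    1 (brighter than the mean) or 0 (darker), giving a 64-bit signature."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two signatures; a small distance
    suggests the images are copies or near-copies of each other."""
    return bin(a ^ b).count("1")

# Hypothetical file names for illustration only.
known_signature = average_hash("confirmed_example.jpg")   # signature from a database of confirmed material
candidate_signature = average_hash("uploaded_example.jpg")  # signature of newly uploaded content

if hamming_distance(known_signature, candidate_signature) <= 5:  # arbitrary example threshold
    print("Likely a copy of known material; flag for review.")
```

Comparing compact signatures rather than raw pixels is what lets platforms match known material at scale, even when an image has been resized or slightly altered.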
“What we discovered from our first round of notices sent last August to companies… is that many are not taking relatively simple steps to protect children,” Grant said.
Grant told Senate estimates last week that the “variation across the industry” in the use of detection technologies, and the fact that companies owning multiple platforms had rolled out effective solutions to some services but not others, was “startling”.
Although Microsoft developed PhotoDNA, it has not deployed the tool on OneDrive, Skype or Hotmail.
eSafety’s report on those responses also found considerable variation in platforms’ use of technologies to detect confirmed videos, new images and live-streamed content.
A key premise of eSafety’s argument is the strong positive correlation between the number of these content types a platform has deployed detection technology for and the number of reports it makes to anti-child-exploitation bodies.
WhatsApp, for instance, which has deployed technology to detect confirmed images and both confirmed and new videos, made 1.37 million content referrals to the US National Center for Missing and Exploited Children (NCMEC) in 2021.
iMessage, on the other hand, cannot identify any of these forms of content and made only 160 referrals to NCMEC during the same time frame.
Musk’s Australian staff cuts
Grant also singled out Twitter, saying “the very people whose job it is to protect children” were culled when the company finished axing its Australian workforce in January.
“Elon Musk tweeted that addressing child exploitation was ‘Priority #1’, but we have not seen detail on how Twitter is delivering on that commitment,” Grant, who was herself Twitter’s Australian and South East Asian public policy director until 2016, said today.
The watchdog told a parliamentary inquiry on Monday that Twitter’s first responders to harmful content detections in Australia – the staff both designing and enforcing Twitter’s compliance with the Basic Online Safety Expectations – were recently axed.
“One of the core elements of the basic online safety expectations is a broad user safety component,” eSafety acting chief operating officer Toby Dagg said at the inquiry into law enforcement capabilities in relation to child exploitation.
“We would say that adequately staffing and resourcing trust and safety personnel constitutes an obvious component of that particular element of the basic online safety expectations,” he added.