Digital platforms face tougher hate speech and gen AI rules


The government is proposing tougher regulations governing how digital platforms minimise and respond to hate speech and the malicious use of generative AI, as part of a review of the Online Safety Act.




The Act has been used to compel platforms to reveal more information about their measures for reducing illegal content and their responses to users’ complaints about image-based abuse, cyberbullying and adult cyber abuse.

Communications Minister Michelle Rowland told the Press Club yesterday that the Act’s regulatory framework should be expanded to address emerging online harms like malicious use of generative AI and a spike in hate speech.

“What I have done today, in announcing consultation on updating the expectations, is to draw hate speech into that rubric,” she said.

“This will be the first time under the BOSE – the Basic Online Safety Expectations – that we will have a requirement on the platforms to report against what they may have for their own systems and policies for regulating hate speech.”

BOSE does not mandate how platforms meet the expectations, but it does allow the eSafety Commissioner to issue fines of up to $787,000 if companies fail to report on their progress on measures like removing or reducing the discoverability of harmful content.

Submissions to the consultation will be accepted until February 16.

BOSE can be updated without requiring the government to pass legislation.

The review will be headed by former Australian Competition and Consumer Commission deputy chair Delia Rickard.

The current framework provides support to individual victims of online harm, including eSafety’s assistance in removing content, but Rowland said it should be expanded to place greater emphasis on protecting targeted communities.

“There is deep concern across the community about the way hateful language spreads online – including recent reporting about the rise in anti-Semitic and Islamophobic rhetoric,” she said.

“While the Online Safety Act provides protections for individuals who have been targeted by seriously harmful online abuse, there is no mechanism to address harmful abuse directed at communities on the basis of their religion or ethnicity.”

Rowland also said that the review would consider regulatory responses to AI-generated content “designed to humiliate, embarrass, offend – and even abuse – others”.

“Under the proposed changes, services using generative AI would explicitly be expected to proactively minimise the extent to which AI can be used to produce unlawful and harmful material,” she said.

“This would cover, for example, the production of ‘deepfake’ intimate images or videos, class 1 material such as child exploitation or abuse material, or the generation of images, video, audio or text to facilitate cyber abuse or hate speech.”

In September, the eSafety Commissioner also registered an industry code for search engines that requires the sector to prevent its recently adopted AI capabilities from generating child abuse material.

Rowland also announced that the government is “in the advanced stages” of signing “a new online safety and security memorandum of understanding with the UK,” which passed its own Online Safety Act this year.

“The review will consider whether regulatory approaches being adopted internationally – including the duty of care approach that is progressing in the UK – should be adopted in the Australian context.”


