Gov floats pile-on, GenAI, E2EE and VR laws

Online abuse and illegal content, which platforms can be fined for enabling, could be broadened to include pile-on attacks and harms facilitated through VR, generative AI, end-to-end encryption (E2EE) and recommender systems.
Australia’s national content moderator could also get tougher powers to investigate and penalise social media platforms that fail to remove prescribed materials, reduce their discoverability, or prevent their upload, the government revealed.

Communications Minister Michelle Rowland, publishing the terms of reference [pdf] for the review of the Online Safety Act 2021, said an issues paper would be released in the first half of the year.

Pile-ons, E2EE, GenAI, VR, recommender systems

The terms of reference include forms of online abuse that Rowland had not previously flagged for the eSafety Commissioner’s remit, such as “volumetric (pile-on) attacks”, as well as harms facilitated by an extensive list of technologies.

“Additional arrangements…to address online harms raised by a range of emerging technologies including but not limited to… immersive technologies, recommender systems, end-to-end encryption [and] changes to technology models such as decentralised platforms,” will be considered.

In a statement accompanying the terms, Rowland said that “so much of modern life happens online which is why it is critical our online safety laws are robust and can respond to new and emerging harms.”

Adding end-to-end encryption (E2EE) to the list of harm-facilitating technologies comes two months after eSafety clashed with industry associations and privacy rights groups over whether E2EE services should have the same obligations to detect illegal material as other providers. 

The “online abuse of public figures and those requiring an online presence as part of their employment” was another subject of inquiry in the terms of reference.

Since the Act commenced in January 2022, eSafety Commissioner Julie Inman Grant has used it to enforce the disclosure of platforms’ measures for reducing illegal content and responding to users’ complaints about image-based abuse, cyberbullying and adult cyber abuse.

The terms also cover online harms and issues Rowland previously flagged when announcing the review in November, such as extending the regime’s focus from removing content that individual end-users report to eSafety to also addressing systemic hate speech targeting communities.

She also said that the framework needed to be updated to better address the risks of generative AI. 

“Our laws can never be a set-and-forget, particularly as issues like online hate and deepfakes pose serious risks to Australian users,” she said.

Heavier fines

Amid concerns that current penalties are too lenient to incentivise platforms to improve safety — X copped a $610,500 fine for not cooperating with a probe into its practices for combating child abuse material — Rowland said the statutory review would start a year earlier than the Act requires.

“The Albanese government has brought forward our review of the Online Safety Act to ensure the eSafety Commissioner has the right powers to help keep Australians safe,” she said. 

The terms of reference cover “whether penalties should apply to a broader range of circumstances; whether the current information gathering powers, investigative powers, enforcement powers, civil penalties or disclosure of information provisions should be amended; and whether the current functions and powers in the Act are sufficient to allow the commissioner to carry out their mandate.”

Rowland said: “Interested individuals, civil society groups and industry members are encouraged to share their views as part of the review process, with detail on the consultation to be shared in coming months.”

Former Australian Competition and Consumer Commission deputy chair Delia Rickard will lead the review, which is due to be completed by October 31 and tabled in parliament within 15 sitting days afterwards.