The federal government has been asked to regulate “high-risk” uses of artificial intelligence as part of a raft of measures recommended by a senate inquiry.
The report [pdf] from the Select Committee on Adopting Artificial Intelligence (AI) follows a consultation into the “opportunities and impacts for Australia arising out of the uptake of AI technologies”.
The committee primarily called for “new whole-of-economy, dedicated legislation to regulate high-risk uses of AI” that is “supplemented by a non-exhaustive list of explicitly defined high-risk AI uses”.
These uses must include so-called “general-purpose” AI models, such as the large language models (LLMs) that underpin ChatGPT.
Among the committee’s 13 recommendations was a requirement for AI developers to be “transparent” about the use of copyrighted works in their training datasets, and to ensure that the use of such “works is appropriately licensed and paid for”.
The committee also called for the government to implement recommendations made last year in a review of the Privacy Act, in particular “an individual’s right…to request meaningful information about how automated decisions are made.”
Meanwhile, as states such as Queensland look to implement their own automated decision-making (ADM) guardrails, the committee called for a federal “legal framework covering ADM in government services”.
This, the report said, should be informed by the Attorney-General’s Department’s ongoing consultation into the use of ADM, which followed the government agreeing to 38 recommendations from a previous consultation last year.
Lastly, the report said the government should take a “coordinated, holistic approach to managing the growth of AI infrastructure in Australia”.
First launched in March 2024, the inquiry began consulting with members of the public and industry in May, with an original reporting deadline of September 19.
However, the committee’s reporting date was pushed to November 26 to allow it to “consider” the impact of generative AI on the federal election in the United States, held on November 5.