University study warns of AI risks


AI is seen as a business “panacea of efficiency and effectiveness”, but a group of academics from the University of the Sunshine Coast is warning adopters not to ignore AI’s moral and technical risks.



The research, published in the journal AI and Ethics, warns that the downsides of businesses rushing into generative AI include privacy and security risks to the public, staff, and other stakeholders.

The university said the risks also include mass data breaches exposing third-party information ingested during AI training, and even business failures caused by decisions based on poisoned AI models.

“There are growing concerns that the race to integrate generative AI is not being accompanied by adequate guardrails or safety evaluations,” the study stated.

“The rapid adoption of generative AI seems to be moving faster than the industry’s understanding of the technology and its inherent ethical and cyber security risks,” co-author and UniSC lecturer in cyber security Dr Declan Humphreys said.

“Organisations caught in the hype can leave themselves vulnerable by either over-relying on or over-trusting AI systems.”

The paper also noted that organisations can be exposed to higher error rates during their AI rollout, and many also lack understanding of how their proprietary data, when used to train a large language model (LLM), could be exposed to outsiders.

The paper stated that “owing to a lack of critical thinking and analysis skills in the corporate sector, [generative AI] may result in both poor performance and embarrassing results.”

The authors offer a five-point checklist for companies looking at AI: practise secure and ethical AI model design; have a trusted and fair data collection process; implement secure data storage; follow ethical principles for AI model retraining and maintenance; and upskill, train and manage staff.
