Suncorp ramps up AI safety work


Suncorp is set to work on more “comprehensive AI safety standards” that apply to its technology and business functions, particularly as it looks to expand generative AI beyond internal uses.



The insurer is recruiting for an ‘AI safety manager’ whose remit will be to “design and implement AI safety standards and guidelines” and to “operationalise [these] guidelines for in-house AI models and third-party applications.”

Suncorp flagged this direction last year, stating that it had ambitions to expand its use of generative AI technology.

At the time, it said it was modelling its governance and risk management structures on the federal government’s AI ethics principles.

CTO and executive general manager of AI transformation Priyanka Paranagama told Digital Nation that the company’s AI risk management and governance approach remains “broadly aligned to the federal government’s AI ethics principles, and also now to the voluntary AI safety standard, released by the Department of Industry last September.”

“We are balancing the opportunities AI represents with the need to operate within a clearly defined risk appetite,” Paranagama said.

He noted that exposing generative AI models or their outputs to customers would require additional de-risking work.

“We are working through what additional controls are needed as our GenAI use cases evolve, broadening from GenAI being only internally focussed to providing insights directly to our customers,” Paranagama said.

The AI safety manager position, according to the recruitment advertisement, is a “leadership role [that] provides an opportunity to drive innovation while safeguarding operational integrity in AI transformation initiatives.”


