CISOOnline

Bank regulator sounds warning over cybersecurity threat posed by AI models

As the technology spreads, threat actors will use similar models to uncover flaws more quickly and easily, potentially outpacing the speed at which today’s patching and remediation programs can address them.

Governance not keeping up

Before drawing its conclusions, APRA (the Australian Prudential Regulation Authority) had engaged with the industry and found that governance was failing to keep pace with the shift in risk that AI represents. During that research, the letter said, “APRA observed a tendency to treat AI risk as ‘just another technology’. This misses key differences such as the distinct characteristics of predictive systems, adaptive behaviour in models, ethical considerations such as inherent bias, and privacy and data risks.”

The body identifies several areas for improvement. The biggest is the urgent need to identify and remediate vulnerabilities more rapidly, something that would require a major overhaul of current processes. Organizations also need “robust security testing across AI‑generated code, software components, and libraries,” coupled with deeper assessment of major AI platforms and services.
