IndustrialCyber

Australia’s CISC tightens cyber reporting rules to capture AI-driven incidents in critical infrastructure


Australia’s Cyber and Infrastructure Security Centre (CISC) outlined how regulatory obligations under the Security of Critical Infrastructure Act 2018 (SOCI Act) are designed to embed risk management, preparedness and resilience into the day-to-day operations of critical infrastructure owners and operators. The agency stated that cybersecurity incidents, including those involving AI, must be reported to the Department of Home Affairs under the SOCI Act through the Part 2B Notification of Cyber Security Incident (NSCI) obligation, commonly referred to as Mandatory Cyber Incident Reporting (MCIR).

The NSCI obligation requires entities to disclose incidents that have a significant or relevant impact on critical assets, feeding into a consolidated national threat picture. That visibility enables the government to respond in real time, whether through immediate operational support or by working with industry to strengthen baseline security and resilience over the longer term. More broadly, the framework requires responsible entities to provide operational information, maintain risk management programs, and report cyber incidents, while improving information sharing between industry and government to strengthen visibility of the national threat environment.

The regime scales depending on asset criticality, with the most vital systems subject to additional oversight and enhanced cybersecurity obligations. These measures reflect a broader push to ensure that essential services remain resilient against a range of hazards, including cyber attacks, natural events and system failures, particularly given the interconnected nature of infrastructure where disruption in one sector can cascade across others and impact economic stability and national security. 

CISC observed that while AI has the potential to offer efficiency and cost benefits, it can also introduce new risks that require careful management through policy, research, governance, and cybersecurity. 

The agency provided de-identified examples of recent AI-related cybersecurity incidents involving critical infrastructure assets. In one case, a responsible entity confirmed unauthorised access to a network or device after a privileged internal staff member installed and used multiple AI Visual Studio Code extensions. These AI agents established connections to several external AI platforms while operating on a privileged database host, with activity traced back to 2025, creating a large review window and high volumes of logs to assess.
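Triage in a case like this typically means filtering large volumes of proxy or firewall logs for outbound connections from privileged hosts to known AI platform endpoints. The sketch below is not from the CISC advisory; the log format, host names and domain list are all hypothetical, shown only to illustrate the kind of review such an incident creates.

```python
# Hypothetical list of external AI platform domains to flag; a real
# investigation would draw on threat intelligence and the entity's
# own inventory of sanctioned services.
AI_PLATFORM_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Sample proxy log lines (timestamp, source host, destination domain),
# in an assumed space-separated format for illustration only.
LOG_LINES = [
    "2025-03-02T10:15:04Z db-priv-01 api.openai.com",
    "2025-03-02T10:15:09Z web-01 cdn.example.com",
    "2025-03-02T10:16:30Z db-priv-01 api.anthropic.com",
]

def flag_ai_connections(lines, privileged_hosts):
    """Return entries where a privileged host contacted an AI platform."""
    hits = []
    for line in lines:
        timestamp, host, dest = line.split()
        if host in privileged_hosts and dest in AI_PLATFORM_DOMAINS:
            hits.append((timestamp, host, dest))
    return hits

print(flag_ai_connections(LOG_LINES, {"db-priv-01"}))
```

With a multi-month review window, the same filter would run over archived logs to establish when the extensions first began calling out.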

In a separate incident, a responsible entity reported that an employee uploaded sensitive documents to ChatGPT while performing contracted work. The material included confidential details about privileged users, such as contact information, access activity and identification numbers, raising concerns about data exposure and governance around the use of external AI tools.

Many AI-related risks can be mitigated by applying established cybersecurity practices, including those outlined in the Australian Signals Directorate's Information Security Manual (ISM). These measures emphasise governance, secure system management and workforce awareness as core to reducing exposure.

At the leadership level, boards and executive committees are responsible for ensuring that artificial intelligence systems are secure, controllable, subject to human oversight and used in an ethical and accountable manner. This accountability sets the foundation for how AI is deployed and managed across critical environments.

From an operational standpoint, systems must be administered in a secure, accountable and auditable way, ensuring that all activities can be traced and verified. Secure configuration management is equally critical, requiring systems to be aligned with approved and maintained baselines, with attack surfaces and pathways reduced, and configurations continuously monitored and consistently enforced.
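In practice, enforcing an approved baseline means continuously comparing a system's current configuration against it and flagging any deviation. A minimal sketch of that drift check follows; the setting names and values are hypothetical, not drawn from any particular baseline standard.

```python
# Approved configuration baseline (hypothetical settings for illustration).
APPROVED_BASELINE = {
    "remote_admin_enabled": False,
    "audit_logging": True,
    "unapproved_extensions": "blocked",
}

def detect_drift(current, baseline):
    """Return settings whose current value deviates from the baseline,
    mapped to (expected, actual) pairs."""
    return {
        key: (baseline[key], current.get(key))
        for key in baseline
        if current.get(key) != baseline[key]
    }

# A host whose remote administration setting has drifted from baseline.
current_config = {
    "remote_admin_enabled": True,
    "audit_logging": True,
    "unapproved_extensions": "blocked",
}

print(detect_drift(current_config, APPROVED_BASELINE))
```

Running such a check on a schedule, and alerting on a non-empty result, is one simple way to make configuration enforcement continuous rather than point-in-time.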

Only trustworthy software should be allowed to execute within these environments, meaning it must be supported, verified and explicitly authorised. Alongside technical controls, organisations must ensure that personnel receive ongoing cybersecurity awareness training tailored to their roles, access levels and the evolving threat landscape, including specific operational security considerations tied to AI use.
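One common way to enforce "supported, verified and explicitly authorised" execution is hash-based application allowlisting: a binary runs only if its digest matches an approved entry. The sketch below assumes a SHA-256 allowlist; the tool name and payloads are invented for illustration, and a real deployment would source digests from a signed software inventory.

```python
import hashlib

# Hypothetical allowlist mapping tool names to approved SHA-256 digests.
ALLOWLIST = {}

def sha256_of(data: bytes) -> str:
    """SHA-256 digest of a binary's contents, as a hex string."""
    return hashlib.sha256(data).hexdigest()

def is_authorised(name: str, data: bytes) -> bool:
    """Permit execution only if the binary's hash matches its allowlist entry."""
    return ALLOWLIST.get(name) == sha256_of(data)

# Register an approved build, then check it and a tampered variant.
approved_blob = b"approved build"
ALLOWLIST["backup_agent"] = sha256_of(approved_blob)

print(is_authorised("backup_agent", approved_blob))      # True
print(is_authorised("backup_agent", b"tampered build"))  # False
```

The same principle underpins production allowlisting controls, which additionally verify publisher signatures rather than relying on hashes alone.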

Recently, the CISC began industry consultation on a proposed package of targeted reforms to strengthen the Ministerial Directions powers under the SOCI Act, as part of a broader effort to sharpen government response capabilities during serious cyber incidents. The measures are designed to ensure authorities can act more decisively when critical infrastructure assets face threats that could trigger cascading disruptions across sectors and materially impact national security, economic stability, or essential services.


