NIST and MITRE partner to test AI defense technology for critical infrastructure

The National Institute of Standards and Technology is partnering with a nonprofit research organization to study how AI can boost the security of critical infrastructure.

NIST on Monday announced that the agency and MITRE are creating an AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats to “drive the development and adoption of AI-driven tools” that can help security personnel fend off hackers intent on damaging or disabling power plants, hospitals and other infrastructure systems.

“NIST will work closely with MITRE by focusing on areas where collaborative development and pilot testing have the potential to demonstrate significant technology adoption impacts at the fast pace of innovation,” a NIST spokesperson told Cybersecurity Dive. “The goal of the AI accelerators is to help U.S. industry make smart choices about AI implementation.”

The agency said in its announcement that the economic security center, along with a parallel effort focused on manufacturing productivity, “will develop the technology evaluations and advancements that are necessary to effectively protect U.S. dominance in AI innovation, address threats from adversaries’ use of AI, and reduce risks from reliance on insecure AI.”

The two new AI centers are part of the Trump administration’s strategy for maintaining America’s competitive advantage in AI research and deployment at a time when China is increasingly asserting itself in the field. NIST said the new research operations would help implement the White House’s AI Action Plan, whose security component focuses on critical infrastructure protection.

NIST said it “expects the AI centers to enable breakthroughs in applied science and advanced technology and deliver disruptive innovative solutions to tackle the most pressing challenges facing the nation.”

Assuring reliable automation

NIST did not give examples of specific projects that the new AI security center would work on, but experts offered several ideas for the public-private partnership.

Nick Reese, the chief operating officer at the AI stress-testing company Optica Labs, said the center should explore ways to ensure the reliability of mission-critical systems that rely on AI models. Companies adopt AI to simplify data analysis and service delivery, he said, but “it is equally important to make sure that we are not only making decisions faster, but also more accurate[ly].”

Other research is already focusing on ways to protect AI datasets and models from hackers, Reese said, so the new NIST center should go beyond those efforts. “The real impactful work will be in creating true AI assurance at the point where humans interact with the systems.”

“Right now, there is a real dearth of AI safety and assurance testing and benchmarking because most testing is done relative to model performance,” said Reese, a former director for emerging technology policy at the Department of Homeland Security. “NIST and MITRE have a real chance to expand the AI assurance space for the benefit of the delivery of critical services.”

Andrew Lohn, a senior fellow at Georgetown University’s Center for Security and Emerging Technology, agreed that increasing reliability should be a priority for AI security research, especially projects focused on enhancing critical infrastructure.

“AI can do impressive things, but … it is a lot less reliable than what we are used to demanding of our systems and components,” said Lohn, who wrote a paper on the subject in 2020.

Reliability issues shouldn’t disqualify AI from use in critical infrastructure, Lohn said, but they should spur research on ways to mitigate those issues. He drew an analogy to safety measures that mitigate human failings, such as speed bumps and airplane cockpit alarms.

“Today, our processes and standards are designed for infallible mechanical and electrical components combined with fallible human operators,” said Lohn, a former director for emerging technology policy at the White House’s National Security Council. “For AI to be useful, we need to understand how to design systems and standards around fallible electro-mechanical systems that do not always fail the way humans do.”

Critical infrastructure facilities have a lower tolerance for AI glitches than other businesses do, so the center will need to help develop technologies that make those failures less frequent or mitigate their impact when they occur.
