Rapid AI Adoption Causes Major Cyber Risk Visibility Gaps

As software supply chains become longer and more interconnected, enterprises are well aware of the need to protect themselves against third-party vulnerabilities. However, the rampant adoption of artificial intelligence chatbots and AI agents means many are struggling to do so.

In fact, the majority of organizations are exposing themselves to unknown risks by allowing employees to access AI services and software packages that include AI integrations, with little oversight.

This is one of the main findings of Panorays’ latest CISO Survey for Third-Party Cyber Risk Management, which found that 60% of CISOs rate AI vendors as “uniquely risky,” primarily due to their opaque nature.

Yet despite recognizing the risk, only 22% of CISOs have established formal processes for vetting AI tech vendors, leaving the door open for employees to unwittingly leak sensitive information via the prompts they enter.

According to Panorays, this creates risks that traditional third-party vulnerability assessment tools cannot properly capture, meaning organizations have no real way of knowing the dangers they’re exposing themselves to.

Organizations Face New Risks with AI

The survey of 200 U.S. CISOs found that 62% see AI vendors as having a distinct risk profile compared to traditional third-party software vendors, with 9% describing them as “significantly different” and 53% saying they are “somewhat different.”

The problem with AI chatbots is that most are closed-source, which means their underlying code is proprietary. Consequently, security teams have little understanding of how chatbots process the data that’s fed into them. It also means there’s no easy way for organizations to properly audit them.

In addition, AI users often lack security awareness regarding chatbots, increasing the risk that they might unwittingly feed sensitive information such as corporate secrets and customer data into these models.

While there’s plenty of uncertainty about how AI systems use the data fed into them, the anecdotal evidence regarding how it might later be exposed is not encouraging. One of the most infamous examples was a 2023 incident involving Samsung, which discovered that its employees had pasted proprietary code, along with the minutes of confidential internal meetings involving senior executives, into ChatGPT.

In both cases, ChatGPT seemingly retained this data and used it to train its underlying large language model, meaning it could have informed output generated in response to later prompts. LLM developers rarely disclose prompt leaks and prompt injection incidents, but they are known to happen.

CISOs Aren’t Doing Enough

What’s most alarming is that CISOs seem to be doing little to address these risks. Despite knowing the dangers of AI chatbots, 52% of organizations still onboard AI tools using the same general processes they use to vet traditional third-party software vendors. Yet because AI chatbots behave far less predictably than traditional software, general-purpose onboarding is clearly ill-suited to the task.

Panorays found that just 22% of CISOs have developed dedicated, documented policies for vetting AI tools, while 25% rely on informal or case-by-case evaluations, an approach that may be preferable to purely generic vetting but still poses risks due to its lack of standardization.

The worrying lack of proper processes for onboarding third-party AI tools is one of the main reasons why CISOs admit to having diminished visibility into third-party vulnerabilities. The survey found that just 17% of respondents claim to have “full visibility” into such threats, meaning the remaining 83% lack a complete picture of how large their organization’s threat surface really is.

That likely explains why 60% of CISOs said they’ve witnessed an increase in incidents stemming from third-party vulnerabilities over the last year.

If there’s one bright spot in the report, it’s that CISOs do at least recognize the need for a new approach to onboarding AI, and there’s evidence that some larger organizations are already acting on it. Breaking down the results, Panorays said 38% of companies with 10,000 or more employees have established AI-specific onboarding policies, compared to just 26% of organizations with between 5,000 and 9,999 employees, and only 10% of firms with fewer than 5,000 staff.

The findings underscore the evolving nature of the CISO’s role. AI tools have become extremely popular among enterprise workers because of their convenience, enabling faster decision-making and greater productivity.

Those benefits cannot easily be ignored, but they also put more pressure on CISOs, who must balance the integration of AI with more robust vetting and security measures to ensure compliance is maintained and sensitive data isn’t leaked.

Adoption Outpacing Policy

As Panorays notes, the findings suggest organizations are adopting AI tools faster than they can be secured, creating a dangerous visibility gap where risky models are being given access to all kinds of sensitive information without proper scrutiny.

Fortunately, CISOs do at least seem to recognize the urgent need for AI-specific onboarding policies, and implementing these will likely be one of their top priorities in the coming months.




