Better governance is required for AI agents
AI agents are one of the most widely deployed types of GenAI initiative in organisations today. There are many good reasons for their popularity, but they can also pose a real threat to IT security.
That’s why CISOs need to be keeping a close eye on every AI agent deployed in their organisation. These might be outward-facing agents, such as chatbots designed to help customers track their orders or consult their purchase histories. Or, they might be internal agents that are designed for specific tasks – such as walking new recruits through an onboarding process, or helping financial staff spot anomalies that could indicate fraudulent activity.
Thanks to recent advances in AI, and natural language processing (NLP) in particular, these agents have become extremely adept at responding to user messages in ways that closely mimic human conversation. But in order to perform at their best and provide highly tailored and accurate responses, they must not only handle personal information and other sensitive data, but also be closely integrated with internal company systems, those of external partners, third-party data sources, not to mention the wider internet.
Whichever way you look at it, all this makes AI agents an organisational vulnerability hotspot.
Managing emerging risks
So how might AI agents pose a risk to your organisation? For a start, they might inadvertently be given access, during their development, to internal data that they simply shouldn’t be sharing. Instead, they should only have access to essential data and share it with those authorised to see it, across secure communication channels and with comprehensive data management mechanisms in place.
Additionally, agents could be based on underlying AI and machine learning models containing vulnerabilities. If exploited by hackers, these could lead to remote code execution and unauthorised data access.
In other words, vulnerable agents might be lured into interactions with hackers in ways that lead to profound risks. The responses delivered by an agent, for example, could be manipulated by malicious inputs that interfere with its behaviour. A prompt injection of this kind can direct the underlying language model to ignore previous rules and directions and adopt new, harmful ones. Similarly, malicious inputs might also be used by hackers to launch attacks on underlying databases and web services.
The message to my fellow CISOs and security professionals should be clear: rigorous assessment and real-time monitoring is as essential to AI and GenAI initiatives, especially agents handling interactions with customers, employees and partners, as it is to any other form of corporate IT.
Don’t let AI agents become your blind spot
I’d suggest that the best place to start might be with a comprehensive audit of existing AI and GenAI assets, including agents. This should provide an exhaustive inventory of every example to be found within the organisation, along with a list of data sources for each one and the application programming interfaces (APIs) and integrations associated with it.
Does an agent interface with HR, accounting or inventory systems, for example? Is third-party data involved in the underlying model that powers their interactions, or data scraped from the Internet? Who is interacting with the agent? What types of conversation is the agent authorised to have with different types of users, or they to have with the agent?
It should go without saying that where organisations are building their own, new AI applications from the ground up, CISOs and their teams should work directly with the AI team from the earliest stages, to ensure that privacy, security and compliance objectives are rigorously applied.
Post-deployment, the IT security team should have search, observability and security technologies in place to continuously monitor an agent’s activities and performance. These should be used to spot anomalies in traffic flows, user behaviours and the types of information shared – and to halt those exchanges abruptly where there are grounds for suspicion.
Comprehensive logging doesn’t just enable IT security teams to detect abuse, fraud and data breaches, but also find the fastest and most effective remediations. Without it, agents could be engaging in regular interactions with wrong-doers, leading to long-term data exfiltration or exposure.
A new frontline for security and governance
Finally, CISOs and their teams must keep an eye out for so-called shadow AI. Just as we saw employees adopt software-as-a-service tools often aimed at consumers rather than organisations in order to get work done, many are now taking a maverick, unauthorised approach to adopting AI-enabled tools without the sanction or oversight of the organisational IT team.
The onus is on IT security teams to detect and expose shadow AI wherever it emerges. That means identifying unauthorised tools, assessing the security risks they pose, and taking swift action. If the risks clearly outweigh the productivity benefits, those tools should be blocked. Where possible, teams should also guide employees toward safer, sanctioned alternatives that meet the organisation’s security standards.
Finally, it’s important to caution that just because interacting with an AI agent may feel like a regular human conversation, agents don’t have the human ability to exercise discretion, judgement, caution or conscience in those interactions. That’s why clear governance is essential, and users must also be aware that anything shared with an agent could be stored, surfaced, or exposed in ways they didn’t intend.
Source link