The president of secure messaging app Signal has warned of the security implications of agentic AI, where artificial intelligence (AI) is given access to systems to complete tasks on a person’s behalf.
In the “Delegated decisions, amplified risk” session at the United Nations’ AI for Good summit, Meredith Whittaker spoke about how the security of Signal and other applications would be compromised by agentic AI. She said the industry was spending billions on advancing AI and betting on developing powerful intermediaries.
As an example of the access that AI agents require, she said: “To make a booking for a restaurant, it needs to have access to your browser to search for the restaurant, and it needs to have access to your contact list and your messages so it can message your friends.
“Having access to Signal would ultimately undermine our ability at the application layer to provide robust privacy and security.”
Whittaker noted that for AI agents to do their jobs autonomously without user interaction, they require pervasive access at the root level to the user’s IT systems. Such access, as Whittaker pointed out, goes against cyber security best practices “in a way that any security researcher or engineer here knows is exactly the kind of vector where one point of access can lead to a much more sensitive domain of access”.
Another security risk of agentic AI is that older software libraries and system components may contain unpatched vulnerabilities. “When you give an agentic AI system access to so much of your digital life, this pervasive access creates a serious attack vector [to target] security vulnerabilities,” she warned.
The Signal messaging app, like other applications, runs at the application layer of the operating system, and is specifically designed not to use “root” access to avoid cyber security risks.
“The Signal messenger app you’re using is built for iOS or Android, or a desktop operating system, but in none of those environments will it have root access to the entire system. It can’t access data in your calendar. It can’t access other things,” said Whittaker.
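As a rough illustration of the sandboxing Whittaker describes, the minimal sketch below assumes a standard Android app (it is not Signal’s actual code): an application-layer app has to request each narrowly scoped permission from the user and the operating system, and has no code path that grants it root access to the device.

```kotlin
// Minimal sketch (not Signal's actual code) of how an application-layer Android
// app obtains data access: each capability is a narrowly scoped, user-granted
// permission, and nothing in this model gives the app root access to the device.
import android.Manifest
import android.content.pm.PackageManager
import android.provider.CalendarContract
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class CalendarAccessActivity : AppCompatActivity() {

    private val calendarRequestCode = 1001 // arbitrary identifier for the permission callback

    fun readCalendarIfPermitted() {
        val granted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.READ_CALENDAR
        ) == PackageManager.PERMISSION_GRANTED

        if (!granted) {
            // The operating system shows the consent prompt; the app cannot grant
            // itself this permission (it must also be declared in AndroidManifest.xml).
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.READ_CALENDAR), calendarRequestCode
            )
            return
        }

        // Only after explicit user consent can the app query calendar events,
        // and even then only through this mediated content provider.
        contentResolver.query(
            CalendarContract.Events.CONTENT_URI, null, null, null, null
        )?.use { cursor ->
            // ... read event rows within the granted scope ...
        }
    }
}
```

An agent that browses, reads contacts and sends messages on a user’s behalf would need many such grants at once, or deeper system integration, which is the pervasive access Whittaker is warning about.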
“The place where Signal can guarantee the type of security and privacy which governments, militaries, human rights workers, UN workers and journalists need is in the application layer,” she said.
But AI agents need to work around these security restrictions. “We’re talking about the integration of these agents, often at the operating system level, where they’re being granted permissions up into the application layer,” she warned.
For Whittaker, the way agents are being developed should concern anyone whose software runs at the application layer of an operating system, which is where the majority of non-system applications sit.
“I think this is concerning, not just for Signal, but for anyone whose tech exists at the application layer,” she said.
She used Spotify as an example, saying it doesn’t want to give every other company access to all its data. “That’s proprietary information, algorithms it uses to sell you ads. But an agent is now coming in through a promise to curate a playlist and send it to your friends on your messaging app, and the agent now has access to all that data.”
Whittaker also warned governments of the risks they face when they deploy an AI system that relies on an application programming interface (API) from one of the big tech providers to access geopolitically sensitive information.
“How is it accessing data across your systems? How is it pooling that data? We know that a pool of data is a honeypot and can be a tempting resource,” she said.
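One common mitigation, sketched below under the assumption of a generic provider client (the callExternalModel function is a hypothetical stand-in, not any vendor’s real API), is to strip obvious identifiers from a prompt before it leaves the organisation’s systems, so that less sensitive material accumulates in the provider’s pooled logs.

```kotlin
// Minimal data-minimisation sketch. callExternalModel is a hypothetical
// placeholder for whatever client an organisation uses to reach a hosted model.
val emailPattern = Regex("""[\w.+-]+@[\w-]+\.[\w.-]+""")
val phonePattern = Regex("""\+?\d[\d\s().-]{7,}\d""")

// Replace obvious identifiers with neutral tokens before the text leaves local systems.
fun redact(text: String): String =
    text.replace(emailPattern, "[email]")
        .replace(phonePattern, "[phone]")

// Placeholder for the organisation's actual API client.
fun callExternalModel(prompt: String): String =
    "model reply to: $prompt"

fun askProvider(prompt: String): String {
    // Only the redacted text ever reaches the third-party API.
    return callExternalModel(redact(prompt))
}
```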
AI systems are probabilistic and draw on different sets of training data to derive a plausible answer to a user query.
“AI isn’t a magical thing,” Whittaker added. “It’s a handful of statistical models, and AI agents are usually based on a number of different types of AI models wrapped in some software.”
She urged delegates considering agentic AI to assess the data access these systems require and understand how they achieve their outcomes.
The way AI models are trained on enterprise content was the topic of a recent Computer Weekly podcast with Gartner analyst Nader Henein, who discussed the need for access control within the AI engine so it understands which datasets a user is authorised to see.
Henein warned that unless such access control is built into the AI engine, there is a very real risk it will inadvertently reveal information to people who should not have access to it.
One approach Henein sees as a possible way to avoid internal data leakage is to deploy small language models: separate models are trained and deployed on subsets of enterprise data that align with the data access policies for different categories of users.
Henein said such an approach can be both incredibly expensive and incredibly complex, but added: “It may also be the way forward for a lot of cases.”
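A minimal sketch of both ideas, using illustrative types rather than any specific product’s design, might filter documents against a user’s entitlements before any prompt is built, and route each group of users only to the model trained on data that group may see.

```kotlin
// Illustrative types; not any specific product's schema.
data class Document(val id: String, val text: String, val allowedGroups: Set<String>)
data class User(val name: String, val groups: Set<String>)

// Access control applied before the AI engine ever sees the data:
// only documents the user is already entitled to read go into the prompt.
fun buildPrompt(user: User, question: String, corpus: List<Document>): String {
    val visible = corpus.filter { doc -> doc.allowedGroups.any(user.groups::contains) }
    val context = visible.joinToString("\n---\n") { it.text }
    return "Context:\n$context\n\nQuestion: $question"
}

// The small-model variant: each user group is routed to a model trained only
// on the data subset that group is allowed to see.
fun selectModel(user: User, modelsByGroup: Map<String, String>): String? =
    user.groups.firstNotNullOfOrNull { group -> modelsByGroup[group] }
```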
The major AI providers also sell some of this technology to the defence sector, something one presenter at the AI for Good conference urged delegates to be wary of.
What these providers do with the data they collect every time an AI API is called is something every business decision-maker and cyber security expert in the private and public sectors needs to consider.