We have all been there: quickly clicking “Accept” on a long list of permissions to get a new app running or new software installed. However, new research from the security firm Red Canary suggests this common habit can be a goldmine for hackers.
By examining how a legitimate app like ChatGPT connects to corporate accounts, researchers found that its permission request process can sometimes be abused by attackers to sneak into a person’s private inbox.
The Contoso Case Study
Researchers didn’t just guess how this happens; they tracked a specific scenario on 2 December 2025. An employee at a firm called Contoso Corp, identified as [email protected], linked the ChatGPT app to their work account.
The app, which has a specific App ID of e0476654-c1d5-430b-ab80-70cbd947616a, was granted access within the company’s Entra ID environment, known as Tenant ID 747930ee-9a33-43c0-9d5d-470b3fb855e7.
For your information, this is done through an authorisation standard called OAuth. It is the technology behind the “Sign in with Google” and “Sign in with Apple” buttons that let you use different websites without sharing your password.
In this instance, though, the user granted the app permissions to Microsoft Graph, the API that sits in front of Microsoft 365 data. The key permission granted here was Mail.Read. According to researchers, this simple click meant the app had “access to read the emails” of the user. Because the request came from the IP address 3.89.177.26, it looked like a standard setup and didn’t trigger any immediate alarms.
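To make the scope of that single permission concrete, here is a minimal sketch of the request any client holding a valid token with the Mail.Read scope could send to read a user's inbox. The endpoint is the real Microsoft Graph one; the helper function and the token value are hypothetical, for illustration only.

```python
def build_mail_request(access_token):
    """Build the HTTP request a consented app would send to read a user's inbox.

    Any holder of a bearer token carrying the Mail.Read scope can call this
    endpoint; no password or MFA prompt is involved at request time.
    """
    return {
        "method": "GET",
        # Real Microsoft Graph endpoint for the signed-in user's messages.
        "url": "https://graph.microsoft.com/v1.0/me/messages?$top=10",
        "headers": {"Authorization": f"Bearer {access_token}"},
    }

# Hypothetical (truncated) token value, purely illustrative.
request = build_mail_request("eyJ0eXAiOiJKV1Qi...")
```

Note that nothing in the request identifies the human user directly: the token alone carries the identity and the permission.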
The Invisible Security Gap
Most of us rely on security codes or multi-factor authentication to keep our accounts safe. But here is the catch: once a user gives non-admin consent to an app, those extra security layers are often bypassed. The consent creates a Service Principal, a digital representative of the app that stays signed in using a token.
According to researchers, this creates a quiet route into cloud email. Because the app stays logged in using this digital token, it can keep reading data in the background without ever asking for a password or security code again.
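That token itself records which app and tenant it belongs to, which is how investigators can tie a quiet session back to a specific consent. Below is a minimal sketch that decodes a token's claims without verifying it; the sample payload is fabricated to mirror the IDs from the case study and is not a real token.

```python
import base64
import json

def jwt_claims(token):
    """Decode the payload segment of a JWT without verifying its signature.

    For inspection only: the middle segment of a JWT is base64url-encoded
    JSON carrying claims such as appid (the app), tid (the tenant), and
    scp (the granted scopes).
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated payload mirroring the article's Contoso example.
claims = {"appid": "e0476654-c1d5-430b-ab80-70cbd947616a",
          "tid": "747930ee-9a33-43c0-9d5d-470b3fb855e7",
          "scp": "Mail.Read"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{segment}.signature"
print(jwt_claims(token)["scp"])  # → Mail.Read
```

As long as a token like this can be refreshed, the app keeps reading mail with no further prompt to the user.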
This is particularly risky because, by default, many tenants allow users to approve these apps without needing an administrator’s official permission.
How to Close the Door
This isn’t a reason to delete your AI tools, but IT teams need to stay sharp. These risks can be spotted by checking Entra ID AuditLogs for two specific actions: Add service principal and Consent to application. These records show exactly who authorised the app.
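As a sketch of that triage step, the function below scans exported directory audit records (such as those returned by the Graph /auditLogs/directoryAudits endpoint) for those two activity names. The record layout is simplified from the real schema, and the sample entries are invented to match the article's scenario.

```python
# The two audit actions the researchers recommend watching for.
WATCHED = {"Add service principal", "Consent to application"}

def find_consent_events(records):
    """Return (activity, actor, app) tuples for consent-related audit entries."""
    hits = []
    for rec in records:
        if rec.get("activityDisplayName") in WATCHED:
            hits.append((rec["activityDisplayName"],
                         rec.get("initiatedBy", "unknown"),
                         rec.get("targetApp", "unknown")))
    return hits

# Invented sample records mirroring the case study.
sample = [
    {"activityDisplayName": "Consent to application",
     "initiatedBy": "[email protected]",
     "targetApp": "ChatGPT"},
    {"activityDisplayName": "Update user",
     "initiatedBy": "[email protected]"},
]
hits = find_consent_events(sample)
```

Running this over the sample surfaces exactly one event: the consent granted by the Contoso user.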
If a rogue app is discovered, the fix is relatively quick. The research team explained that companies can “remove the consent grant” to instantly kill the app’s access.
This research is a timely reminder that, in the age of AI, the most important security tool is simply being careful about what we allow our apps to do behind the scenes.
(Photo by Emiliano Vittoriosi on Unsplash)


