Hugging Face Discloses Unauthorized Access To Spaces Platform


Hackers gained unauthorized access to artificial intelligence (AI) company Hugging Face’s Spaces platform and may have obtained user secrets, the company revealed in a blog post.

The Google- and Amazon-funded Hugging Face detected unauthorized access to its Spaces platform, a hosting service for showcasing AI/machine learning (ML) applications and for collaborative model development. In short, the platform lets users create, host, and share AI and ML applications, as well as discover apps built by others.

Hugging Face Hack Exploited Tokens

Hugging Face suspects that a subset of Spaces’ secrets may have been accessed without authorization. In response to this security event, the company revoked several HF tokens present in those secrets and notified affected users via email.

“We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default,” Hugging Face said.
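For users rotating credentials, one simple check is to confirm that a newly issued fine-grained token actually works before revoking the old one. The minimal sketch below assumes the huggingface_hub Python client is installed and that the replacement token has been exported in an environment variable named HF_TOKEN_NEW (a placeholder name used purely for illustration):

import os

from huggingface_hub import HfApi

# Read the newly generated fine-grained token (placeholder variable name).
new_token = os.environ["HF_TOKEN_NEW"]

api = HfApi(token=new_token)

# whoami() raises an HTTP error if the token is invalid or has been revoked,
# so a successful call confirms the replacement token is usable.
identity = api.whoami()
print(f"Token is valid for account: {identity['name']}")

# Optionally verify the token can see the account's Spaces before rotating it
# into Space secrets and CI configurations.
for space in api.list_spaces(author=identity["name"]):
    print(space.id)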

The company has not disclosed the number of users impacted by the incident, which remains under investigation.

Hugging Face said it has made “significant” improvements to tighten Spaces’ security in the past few days, including changes to org tokens that improve traceability and audit capabilities, the implementation of a key management service, and expanded systems to identify leaked tokens and invalidate them.
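By way of illustration, detecting leaked tokens typically amounts to scanning repositories, logs, and configuration files for strings that match the known token format; Hugging Face user access tokens start with the prefix hf_. The hypothetical sketch below is not Hugging Face’s internal tooling, just a loose example of that pattern-matching approach:

import re
import sys
from pathlib import Path

# Loose pattern for strings shaped like Hugging Face user access tokens ("hf_" prefix).
TOKEN_PATTERN = re.compile(r"hf_[A-Za-z0-9]{30,}")

def scan(root: str) -> None:
    """Walk a directory tree and flag files containing token-like strings."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_PATTERN.finditer(text):
            # Print only a redacted prefix so the scanner does not re-leak the secret.
            print(f"{path}: possible token {match.group()[:7]}... (redacted)")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")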

It is also working with external cybersecurity experts to investigate the breach and has reported the incident to law enforcement and data protection authorities.

Growing Threats Against AI-as-a-Service Providers

Risks facing AI-as-a-service (AIaaS) providers like Hugging Face are growing rapidly: the sector’s explosive growth makes these platforms a lucrative target for attackers seeking to exploit them for malicious purposes.

In early April, cloud security firm Wiz detailed two security issues in Hugging Face that could allow adversaries to gain cross-tenant access and poison AI/ML models by taking over the continuous integration and continuous deployment (CI/CD) pipelines.

“If a malicious actor were to compromise Hugging Face’s platform, they could potentially gain access to private AI models, datasets and critical applications, leading to widespread damage and potential supply chain risk,” Wiz said in a report detailing the threat.

One of the security issues the Wiz researchers identified related to the Hugging Face Spaces platform itself. Wiz found that an attacker could execute arbitrary code during application build time, allowing them to examine network connections from within the build environment. That examination revealed a connection to a shared container registry containing images belonging to other customers, which the researchers were able to manipulate.
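To make concrete what examining network connections means in practice, the hypothetical sketch below shows the kind of reconnaissance that becomes possible once arbitrary code runs at build time: on a typical Linux build container, a few lines of standard-library Python are enough to list the remote endpoints the environment is talking to. It is an illustration of the technique Wiz described, not the firm’s actual proof of concept:

import socket
import struct

def established_connections(path: str = "/proc/net/tcp"):
    """Parse /proc/net/tcp (present on Linux) and return established remote endpoints."""
    connections = []
    with open(path) as fh:
        next(fh)  # skip the header row
        for line in fh:
            fields = line.split()
            remote_hex, state = fields[2], fields[3]
            if state != "01":  # "01" marks an ESTABLISHED connection
                continue
            ip_hex, port_hex = remote_hex.split(":")
            # Addresses are stored as little-endian hex on common architectures.
            ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
            connections.append((ip, int(port_hex, 16)))
    return connections

if __name__ == "__main__":
    for ip, port in established_connections():
        print(f"established connection to {ip}:{port}")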

Previous research by HiddenLayer identified flaws in the Hugging Face Safetensors conversion service, which could enable attackers to hijack AI models submitted by users and stage supply chain attacks.

Hugging Face also confirmed in December that it fixed critical API flaws that were reported by Lasso Security.

Hugging Face said it is actively addressing these security concerns and continues to investigate the recent unauthorized access to ensure the safety and integrity of its platform and users.


