OTSecurity

Lawmakers open inquiry into cybersecurity risks posed by PRC-origin AI models deployed in critical infrastructure systems


The U.S. House Committee on Homeland Security and the House Select Committee on China launched a joint investigation into national security and cybersecurity risks tied to the increased use of AI models developed in China, including low-cost, open-weight, and API-accessible systems such as those from DeepSeek, Alibaba, Moonshot AI, and MiniMax. Lawmakers are examining concerns that some China-based AI providers may be distilling capabilities from leading U.S. models without authorization and repackaging them into cheaper systems that may lack equivalent safety controls, before making them available to American users and organizations.

As an initial step in the probe, Andrew R. Garbarino, a New York Republican and chairman of the House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection, and John Moolenaar, a Michigan Republican and chairman of the House Select Committee on the Strategic Competition between the United States and the Chinese Communist Party, sent letters to Anysphere and Airbnb, raising concerns about the companies' use of, and exposure to risks through, PRC-developed AI.

The move follows an April 2026 memo from the White House Office of Science and Technology Policy warning that foreign entities, primarily based in China, are conducting deliberate, industrial-scale campaigns to distill U.S. frontier AI systems through proxy accounts and other coordinated methods.

The investigation comes amid growing concern that PRC-based AI companies are using unauthorized model distillation and other illicit techniques to extract capabilities from leading American frontier models, repackaging those capabilities into lower-cost models that lack the safeguards built into the originals, and then marketing those models to U.S. companies, developers, and consumers. While model distillation can be a legitimate AI development technique, distillation conducted through fraudulent accounts, proxy networks, evasion of access restrictions, or violations of U.S. companies' terms of service raises serious concerns about model provenance, intellectual property, cybersecurity, and supply-chain risk.
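The letters do not describe the technique in technical detail. As a rough illustration (not drawn from the source), knowledge distillation typically trains a smaller "student" model to match the softened output distribution of a larger "teacher" model; the core of the objective can be sketched in plain Python as follows, with all numbers here purely hypothetical:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; a higher temperature
    softens the distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.
    Minimizing this over many examples trains the student to imitate the teacher."""
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss and would be nudged toward the teacher.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))       # → 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]) > 0)   # → True
```

In the campaigns the Committees describe, the "teacher" outputs would be harvested at scale through a commercial model's API, which is what makes proxy accounts and terms-of-service evasion central to the allegations.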

In their letter to Anysphere, the Chairmen focus on Cursor’s Composer 2 model, which was reportedly built on an open-weight model developed by Moonshot AI, a PRC-based company publicly implicated in large-scale distillation campaigns targeting American AI systems.

In the letter to Anysphere, the Chairmen wrote, “The billions of dollars American companies invest in foundational research, compute infrastructure, and security engineering is being undercut by a sustained extraction campaign conducted at a fraction of the cost of independent development. This threat is not limited to commercial harm. American frontier AI laboratories invest heavily in security testing and in building guardrails designed to prevent their models from being used to develop weapons, automate software vulnerability discovery and exploitation, generate tailored disinformation, or assist in the synthesis of dangerous chemical or biological agents. When capabilities are stripped out through distillation and repackaged without equivalent safeguards, the resulting models may become available to hostile state actors, terrorist organizations, and criminal enterprises.”

The letter also flagged Cursor's April 21 announcement of a partnership with Chainguard, an open-source security company, to steer AI-generated code toward vetted open-source components and reduce the risk that developers unknowingly pull vulnerable or malicious libraries and container images into production environments. The Chairmen noted that the "development is notable because it reflects an apparent acknowledgment by Cursor that agentic and 'vibe coded' development can cause dependency selection and package inclusion decisions to occur at a scale and speed that outpaces ordinary human review, and because it highlights that the security of an AI coding environment depends not only on the model itself, but also on the provenance and integrity of the packages, libraries, and images the system recommends, retrieves, or incorporates into downstream software."

They added that in environments handling sensitive government, defense-industrial, or critical infrastructure code, those software supply chain risks carry obvious national security implications.

The House committees are requesting detailed records from Anysphere as part of an investigation into national security risks linked to the theft of U.S. AI capabilities, the use of PRC-developed open-weight models, and their integration into tools used across the American economy, including in government, defense, and critical infrastructure contexts. The requested records are due May 13, 2026, and the inquiry focuses on any ties to Chinese AI firms, including Moonshot AI, DeepSeek, MiniMax, Alibaba, Zhipu AI, ByteDance, Tencent, and Baidu, such as partnerships, licensing arrangements, technical collaborations, and financial relationships.

Lawmakers are also seeking extensive documentation on Anysphere’s use of Moonshot AI’s Kimi K2.5 model in its Composer 2 product, including alternative models considered, risk assessments, legal and security analyses, and decisions around disclosure of model provenance. In addition, the request covers detailed technical explanations of data flows in Cursor products, third-party data handling agreements, security testing of integrated models, and steps taken to prevent data exposure to PRC-linked systems. It also asks how Anysphere ensures compliance with U.S. security standards, verifies model integrity, and discloses model origin and risks to customers.

The letter adds, “The Committees further request that appropriate personnel from Anysphere appear for an in-person briefing on these matters, including the issues identified in this letter and Anysphere’s response thereto, no later than May 20, 2026.”

In their letter to Airbnb, the Chairmen say they are investigating what they describe as a broader pattern of PRC-based AI labs allegedly using adversarial distillation to extract capabilities from leading U.S. frontier models, redistributing them as open-weight systems, and embedding those models into widely used American products. They frame this as part of a wider Chinese state-linked effort to accelerate AI development through espionage, intellectual property theft, and other unlawful or deceptive means, raising concerns about the downstream use of PRC-origin models in commercial and public-sector systems in the U.S.

The letter specifically questions Airbnb’s reported use of Alibaba’s Qwen model in customer service, citing its ‘fast and cheap’ performance as justification, while warning of national security and data security risks. It outlines three main concerns: ideological control and censorship embedded in Chinese AI systems under PRC law, elevated safety vulnerabilities and higher failure rates in resisting malicious prompts compared with U.S. models, and data exposure risks when using API-based foreign models that may be subject to PRC legal obligations requiring cooperation with state authorities. The Committees argue these factors make the adoption of such models a structural national security risk rather than a simple cost or performance decision.

The Committees are requesting extensive documentation from Airbnb as part of a joint investigation into its use of PRC-origin AI models. The request covers identification of all Chinese-developed models currently used, tested, or evaluated by Airbnb, including how each is deployed, how it is accessed (API, self-hosted, or third-party), and whether any independent security testing of model weights was conducted before use.

Lawmakers are also seeking detailed technical disclosures on how user and corporate data flows to PRC-linked model providers, including infrastructure routes, server locations, and any entities subject to Chinese jurisdiction. In addition, the request demands internal analyses comparing PRC and non-PRC models, assessments of training data provenance and potential adversarial distillation, documentation of supply chain and model integrity audits, communications with Chinese AI providers, and records of all Airbnb customer and employee data processed by these models over time.

The Committees further request that appropriate personnel from Airbnb appear for an in-person briefing on these matters, including the issues identified in this letter and Airbnb’s response thereto, no later than May 20, 2026. 

In March, the Subcommittee on Cybersecurity and Infrastructure Protection held a hearing to evaluate the growing national security and economic risks posed by AI, robotics, and autonomous sensing technologies developed by companies linked to the PRC. Witnesses testified that technologies developed within adversary-controlled ecosystems can create significant vulnerabilities, enable surveillance, expose sensitive data, and provide access to critical infrastructure systems.

The World Economic Forum (WEF) warned last week that the emergence of advanced AI systems such as Anthropic’s Mythos marks a turning point for cybersecurity, in which machines can autonomously identify previously unknown vulnerabilities, generate exploits, and execute complex attack pathways with minimal human input. This shift collapses the traditional gap between defenders and attackers, accelerating both threat discovery and weaponization while raising concerns that existing security models are ill-equipped to manage the speed and scale of AI-driven cyber risk. 


