AI models are being folded into fraud and influence operations that follow long-standing tactics. A February 2026 update to OpenAI’s Disrupting Malicious Uses of Our Models report details how ChatGPT and related API access were used in romance scams, fake legal services, coordinated influence campaigns, and a state-linked harassment effort.
Six tweets whose text matches a batch of comments generated by the main ChatGPT account in this operation, and posted online by six different X accounts. (Source: OpenAI)
The document lays out specific cases in which accounts were banned for abusing the company’s tools. The cases span financially motivated crime and government-aligned information operations. The models supported drafting, translation, persona development, and internal coordination, while distribution systems and account reach shaped each operation’s impact.
Romance and recovery scams
One of the most detailed cases, Operation Date Bait, describes a semi-automated romance and task scam targeting men in Indonesia. A cluster of ChatGPT accounts and one API customer generated promotional text for a fake dating service and placed paid social media ads directing users to Telegram.
Once engaged, victims were guided through staged “missions” that required escalating payments. Internal messages reviewed by investigators indicated the network was interacting with hundreds of targets at a time and claiming daily revenue in the thousands of dollars. The report states those claims could not be independently verified.
The operation also used the models to translate messages between Chinese-speaking supervisors and Indonesian workers and to generate internal status reports assigning each target a projected payout value.
A second scam, Operation False Witness, involved actors posing as law firms and impersonating U.S. authorities, including the FBI’s Internet Crime Complaint Center. They used ChatGPT to generate legal-style messages and fabricate credentials. Victims were instructed to pay upfront fees, including a 15% service fee, before any supposedly recovered funds would be released.
Recruitment-style outreach
Operation Silver Lining Playbook focused on outreach to U.S. state-level officials and policy professionals. A small set of accounts likely originating in China used ChatGPT to draft consulting invitations from a Hong Kong-based firm.
Prompts instructed the model to establish legitimacy, personalize messages using public background information, and move conversations to video calls on WhatsApp, Zoom, or Teams. The same accounts queried publicly available information about federal office locations and personnel distribution. There was no evidence the outreach resulted in engagement.
Coordinated influence campaigns
Several disruptions detail Russia-linked coordinated inauthentic behavior.
Operation Trolling Stone generated Spanish-language articles and comments about the arrest in Argentina of an alleged Russian cult leader. Fake news pages published AI-generated articles, and networks of accounts posted matching comments across platforms to simulate grassroots engagement.
Most of the Facebook pages involved had only a few hundred followers and limited interaction. Some articles appeared on regional Argentine news sites. The activity was assessed toward the low end of Category 4 on the company’s internal impact scale.
Operation No Bell published geopolitical commentary in sub-Saharan Africa under the byline of a fabricated academic whose credentials could not be verified. Social media reach was limited, though several articles were placed on African news websites.
In these cases, the models were used to draft articles and comments. Platform placement and existing networks shaped visibility.
Content production and state-linked activity
Operation Fish Food, linked to the Rybar network, shows how ChatGPT functioned as a content production engine. Accounts generated batches of multilingual posts that were later distributed on Telegram and X. In one example cited in the report, tweets generated from a single prompt were posted by different accounts: one exceeded 150,000 views while another drew minimal attention. The variation aligned with differences in account reach.
The same network used the models to help draft proposals for influence services in Africa, including election-related activity, with projected annual budgets reaching $600,000.
The most extensive case involved an account linked to an individual associated with Chinese law enforcement. The user attempted to use ChatGPT to plan a covert influence campaign targeting the Japanese prime minister; the model refused. The same user later submitted status reports describing a broader program involving hundreds of staff, thousands of fake accounts, and activity across more than 300 foreign social media platforms.
The report concludes that AI models are being integrated into established fraud and influence operations. They accelerate message production and coordination, but impact continues to depend on distribution networks and account infrastructure.