Threat actors are now leveraging the trust users place in AI platforms like ChatGPT and Grok to distribute the Atomic macOS Stealer (AMOS).
A campaign discovered by Huntress on December 5, 2025, shows that attackers have moved beyond mimicking trusted brands and are now using legitimate AI services to host malicious payloads.
The infection chain begins with a routine Google search. Users querying common troubleshooting phrases such as “Clear disk space on macOS” are presented with high-ranking results that appear to be helpful guides hosted on legitimate domains: chatgpt.com and grok.com.

Unlike traditional SEO poisoning, which directs victims to compromised websites, these links lead to actual, shareable conversations on OpenAI and xAI platforms.
Once the user clicks the link, they are presented with a professional-looking troubleshooting guide. The conversation, generated by the attacker, instructs the user to open the macOS Terminal and copy-paste a specific command to “safely clear system data.”

Because the advice appears to come from a trusted AI assistant on a reputable domain, users often bypass their usual security skepticism.
ChatGPT and Grok Conversations Weaponized
According to Huntress’ analysis, the executed command does not download a traditional file that would trigger macOS Gatekeeper warnings. Instead, it executes a base64-encoded script that downloads a variant of the AMOS stealer.
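The encoded stager can be examined safely: rather than piping an opaque blob into a shell, an analyst can decode it first and read what it would run. A minimal Python sketch using a benign stand-in payload (the real encoded script and its URL are deliberately not reproduced here):

```python
import base64

# Illustrative only: a harmless stand-in for the encoded stager delivered
# by the campaign. The actual payload is not reproduced in this article.
encoded = base64.b64encode(b'echo "payload placeholder"').decode()

# Decode the blob to inspect it instead of executing it.
decoded = base64.b64decode(encoded).decode()
print(decoded)  # reveals the command without running it
```

The same one-line decode works in Terminal (`base64 -d`), which is a safer first step than the copy-paste-and-run instruction the attacker-generated conversation provides.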
The malware uses a “living-off-the-land” technique to harvest credentials without a graphical prompt: it invokes the native dscl utility to validate the user’s password silently in the background.
Once validated, the password is piped into sudo -S, which reads it from standard input to grant root privileges, allowing the malware to install persistence mechanisms and exfiltrate data without further user interaction.
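The dscl-then-sudo chain is distinctive enough to match mechanically. A hypothetical detection sketch (the regex and function name are illustrative assumptions, not taken from Huntress’ report):

```python
import re

# Flag shell command lines that validate a password with `dscl ... -authonly`
# or feed a password to sudo via stdin with `sudo -S`, as described above.
PATTERN = re.compile(r"dscl\b.*-authonly|sudo\s+-S\b")

def is_suspicious(cmdline: str) -> bool:
    """Return True if the command line matches the LOTL escalation pattern."""
    return bool(PATTERN.search(cmdline))

print(is_suspicious('dscl . -authonly "$USER" "$pw" && echo "$pw" | sudo -S true'))  # True
print(is_suspicious("ls -la /tmp"))  # False
```

In practice such a rule would be tuned against legitimate administrative use of both utilities, but the combination of the two in a single session is rare outside this attack pattern.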
The following artifacts and behaviors have been identified as key indicators of this campaign:
| Category | Indicator / Behavior | Context |
|---|---|---|
| Persistence | /Library/LaunchDaemons/com.finder.helper.plist | LaunchDaemon created for persistence. |
| File Path | /Users/$USER/.helper | Hidden executable dropped in the user’s home directory. |
| File Path | /tmp/.pass | Temporary file used to store the plaintext password during escalation. |
| Command | dscl -authonly | Used to silently validate captured credentials without GUI prompts. |
| Command | sudo -S | Used to accept the password via standard input for root access. |
| Network | | Known C2 URL for the initial payload delivery (Base64 decoded). |
This campaign is particularly dangerous because it exploits “behavioral trust” rather than technical vulnerabilities. The attack circumvents traditional defenses like Gatekeeper because the user explicitly authorizes the command in the Terminal.
Security teams are advised to monitor for anomalous osascript execution and unusual dscl usage, particularly when associated with curl commands.
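That monitoring advice can be expressed as a simple correlation: flag any process tree in which a scripting or directory-services binary co-occurs with a download utility. The event shape below is an illustrative assumption, not any specific EDR schema:

```python
# Binaries Huntress advises watching for, plus the download utility they
# are paired with in this campaign.
SUSPECT = {"osascript", "dscl"}
NETWORK = {"curl"}

def flagged_trees(events):
    """events: iterable of (process_tree_id, binary_name) pairs.
    Return the tree ids where a suspect binary co-occurs with a downloader."""
    trees = {}
    for tree, binary in events:
        trees.setdefault(tree, set()).add(binary)
    return [t for t, binaries in trees.items()
            if binaries & SUSPECT and binaries & NETWORK]

sample = [(1, "curl"), (1, "dscl"), (2, "ls")]
print(flagged_trees(sample))  # [1]
```

Both osascript and dscl have legitimate uses, so co-occurrence with curl in one process tree is the signal, not either binary alone.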
For end users, the primary defense is behavioral: legitimate AI services will not request that users execute opaque, encoded Terminal commands for routine maintenance tasks.
The shift to using trusted AI domains as hosting infrastructure introduces a new chokepoint for defenders, who must now scrutinize traffic to those platforms for malicious patterns.
