Google sounds alarm on self-modifying AI malware

Google warns malware now uses AI to mutate, adapt, and collect data during execution, boosting evasion and persistence.

Google’s Threat Intelligence Group (GTIG) warn of a new generation of malware that is using AI during execution to mutate, adapt, and collect data in real time, helping it evade detection more effectively.

Cybercriminals increasingly use AI to build malware, plan attacks, and craft phishing lures. Recent research shows AI-driven ransomware like PromptLock can adapt during execution.

GTIG reports a new phase of AI abuse: attackers now deploy AI-powered malware that adapts behavior during execution.

” For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during execution. These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware.” reads the report published by Google. “While still nascent, this represents a significant step toward more autonomous and adaptive malware.”

In 2025, Google identified the first malware using AI mid-execution to change its behavior dynamically. While current examples are mostly experimental, they signal a shift toward AI-integrated cyberattacks. Attackers are moving past using AI merely for support or coding help, marking the start of a trend likely to grow in future intrusion campaigns.

Below the list of malware with novel AI capabilities GTIG detected in 2025:

Malware Function Description Status
FRUITSHELL Reverse Shell Publicly available reverse shell written in PowerShell that establishes a remote connection to a configured command-and-control server and allows a threat actor to execute arbitrary commands on a compromised system. Notably, this code family contains hard-coded prompts meant to bypass detection or analysis by LLM-powered security systems. Observed in operations
PROMPTFLUX Dropper Dropper written in VBScript that decodes and executes an embedded decoy installer to mask its activity. Its primary capability is regeneration, which it achieves by using the Google Gemini API. It prompts the LLM to rewrite its own source code, saving the new, obfuscated version to the Startup folder to establish persistence. PROMPTFLUX also attempts to spread by copying itself to removable drives and mapped network shares. Experimental
PROMPTLOCK Ransomware Cross-platform ransomware written in Go, identified as a proof of concept. It leverages an LLM to dynamically generate and execute malicious Lua scripts at runtime. Its capabilities include filesystem reconnaissance, data exfiltration, and file encryption on both Windows and Linux systems. Experimental
PROMPTSTEAL Data Miner Data miner written in Python and packaged with PyInstaller. It contains a compiled script that uses the Hugging Face API to query the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands. Prompts used to generate the commands indicate that it aims to collect system information and documents in specific folders. PROMPTSTEAL then executes the commands and sends the collected data to an adversary-controlled server. Observed in operations
QUIETVAULT Credential Stealer Credential stealer written in JavaScript that targets GitHub and NPM tokens. Captured credentials are exfiltrated via creation of a publicly accessible GitHub repository. In addition to these tokens, QUIETVAULT leverages an AI prompt and on-host installed AI CLI tools to search for other potential secrets on the infected system and exfiltrate these files to GitHub as well. Observed in operations

Table 1: Overview of malware with novel AI capabilities GTIG detected in 2025

Google’s Threat Intelligence Group documented early, experimental malware that directly leverages large language models to adapt and evade detection. PROMPTFLUX, a VBScript dropper found in June 2025, queries Gemini to request VBScript obfuscation and evasion code, logging AI responses and containing a “Thinking Robot” module that aims to fetch new evasive code just-in-time; its full self-update routine appears under development and some features remain commented out. Variants instruct Gemini to rewrite the script hourly as an “expert VBScript obfuscator,” embedding API keys and self-regeneration logic to create recursive metamorphism. Although PROMPTFLUX shows proof-of-concept capabilities rather than active network compromise, Google disabled associated assets and strengthened model protections. Separately, GTIG observed APT28 using PROMPTSTEAL (aka LAMEHUG), a data-miner that queries an LLM (Qwen2.5-Coder) via Hugging Face during live operations to generate system- and file-collection commands on the fly; PROMPTSTEAL likely uses stolen API tokens and blindly executes LLM-generated commands to harvest documents and system info before exfiltration.

“PROMPTSTEAL likely uses stolen API tokens to query the Hugging Face API. The prompt specifically asks the LLM to output commands to generate system information and also to copy documents to a specified directory.” reads the report published by Google. “The output from these commands are then blindly executed locally by PROMPTSTEAL before the output is exfiltrated. Our analysis indicates continued development of this malware, with new samples adding obfuscation and changing the C2 method.”

GTIG flagged multiple AI-enabled malware in the wild such as FruitShell, which is a PowerShell reverse shell that runs arbitrary commands and embeds hardcoded AI prompts meant to evade AI-powered defenses.

QuietVault, a JavaScript credential stealer, hunts NPM and GitHub tokens using onsite AI prompts and CLI tools to find secrets.

Together, these cases mark a shift from AI-as-tooling to AI-in-the-loop malware, signaling an emerging threat trajectory that defenders must anticipate and mitigate.

Google warns that in 2025, the underground cybercrime market for AI-powered tools evolved significantly. GTIG found numerous multifunctional AI tools supporting all attack phases, especially phishing campaigns. Many mirrored legitimate SaaS models, offering free versions with ads and paid tiers for advanced features like image generation, API access, and Discord integration.ù

 The report also detailed how nation-state actors misused generative AI tools in thier operations.

“State-sponsored actors from North Korea, Iran, and the People’s Republic of China (PRC) continue to misuse generative AI tools including Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to C2 development and data exfiltration.” concludes the report.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, malware)

Leave a Reply

Your email address will not be published. Required fields are marked *

Subscribe to our Newsletter