VoidLink shows how one developer used AI to build powerful Linux malware

VoidLink is a cloud-focused Linux malware framework, likely built by one person with the help of AI, that offers loaders, implants, rootkit evasion, and modular plugins.

Check Point researchers uncovered VoidLink, a cloud-focused Linux malware framework likely built by a single developer with help from an AI model. VoidLink includes custom loaders, implants, rootkit-based evasion features, and dozens of plugins that extend its capabilities, making it a flexible and powerful threat.

“Until now, solid evidence of AI-generated malware has primarily been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools. VoidLink is the first evidence-based case that shows how dangerous AI can become in the hands of more capable malware developers.” reads the report published by Check Point.

“Operational security (OPSEC) failures by the VoidLink developer exposed development artifacts. These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week.”

Researchers describe VoidLink as a highly mature Linux malware framework that rapidly evolved into a full modular platform with rootkits and cloud and container attack modules. Although the planning documents suggested a large team effort, the leaked artifacts and development timeline revealed AI-generated blueprints executed by a single developer. In less than a week, that developer likely used AI to build and scale VoidLink, showing how quickly AI can enable advanced malware.

According to Check Point, the developer first defined goals and constraints, then used an AI agent to design the architecture, split work across three virtual teams, and generate detailed plans, coding standards, and sprints. Leaked files show extensive Chinese-language documentation, timelines, and guidelines that closely match the recovered source code.

Although the plans described a 20–30 week effort, evidence shows VoidLink became fully functional in under a week, reaching over 88,000 lines of code. The case highlights how a single actor, guided by AI-generated specifications and planning, can rapidly build complex, high-quality malware.

“VoidLink’s development likely began in late November 2025, when its developer turned to TRAE SOLO, an AI assistant embedded in TRAE, an AI-centric IDE. While we do not have access to the full conversation history, TRAE automatically produces helper files that preserve key portions of the original guidance provided to the model. Those TRAE-generated files appear to have been copied alongside the source code to the threat actor’s server, and later surfaced due to an exposed open directory. This leakage gave us unusually direct visibility into the project’s earliest directives.” continues the report.

“In this case, TRAE generated a Chinese-language instruction document. These directives offer a rare window into VoidLink’s early-stage planning and the baseline requirements that set the project in motion.”

Researchers recreated VoidLink by following the leaked specifications and sprint plans in the same TRAE IDE workflow. By feeding the markdown documentation to an AI model sprint by sprint, the system generated code that closely matched the real framework. Check Point confirmed that the detailed guidelines and tests left little room for interpretation, which allowed them to reproduce the results. Each sprint produced working code that could be committed and refined, with the developer acting as product owner. This approach offloaded most of the coding to AI and enabled rapid progress, mirroring the output of multiple professional teams.

“While not a fully AI-orchestrated attack, VoidLink demonstrates that the long-awaited era of sophisticated AI-generated malware has likely begun. In the hands of individual experienced threat actors or malware developers, AI can build sophisticated, stealthy, and stable malware frameworks that resemble those created by sophisticated and experienced threat groups.” concludes the report.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, Artificial Intelligence)
