Iran and China-linked actors used ChatGPT for preparing attacks

OpenAI disrupted more than 20 cyber and influence operations this year, revealing that Iran- and China-linked actors used ChatGPT for planning ICS attacks.

OpenAI announced the disruption of over 20 cyber and influence operations this year, involving Iranian and Chinese state-sponsored hackers.

The company uncovered the activities of three threat actors abusing ChatGPT to launch cyberattacks. One of these groups is CyberAv3ngers, a threat actor linked to the Iranian Islamic Revolutionary Guard Corps (IRGC).

In the past, the group targeted industrial control systems (ICS) at water utilities in Ireland and the U.S. Rather than using advanced hacking techniques, they exploited systems configured with default credentials to compromise target networks.

Observed ChatGPT activity mainly involved reconnaissance: the threat actors used OpenAI's platform to seek information on companies, services, and vulnerabilities, much as they would with search engine queries. The attackers also used it for code debugging assistance.

“The tasks the CyberAv3ngers asked our models in some cases focused on asking for default username and password combinations for various PLCs. In some cases, the details of these requests suggested an interest in, or targeting of, Jordan and Central Europe. The operators also sought support in creating and refining bash and python scripts. These scripts sometimes leveraged publicly available pentesting tools and security services to programmatically find vulnerable infrastructure.” reads OpenAI’s report. “CyberAv3nger accounts also asked our models high-level questions about how to obfuscate malicious code, how to use various security tools often associated with post-compromise activity, and for information on both recently disclosed and older vulnerabilities from a range of products.”

Beyond what previous reports revealed about this threat actor’s focus on ICS and PLCs, the prompts observed during this campaign offer valuable insight into other technologies and software the state-sponsored hackers may target.

OpenAI notes that the interactions with its models did not give CyberAv3ngers any new capabilities, only minor, incremental assistance with tasks already achievable using publicly available, non-AI resources.

OpenAI’s report also detailed the use of ChatGPT by another Iranian threat actor, tracked as Storm-0817.

The group used the chatbot for support with Android malware development and to create a scraper for the social media platform Instagram.

“This actor used our models to debug malware, for coding assistance in creating a basic scraper for Instagram, and to translate LinkedIn profiles into Persian. This included working on malware that was still in development, and looking for information on potential targets.” continues the report. “STORM-0817 asked our models for debugging and coding support in implementing Android malware and the corresponding command and control infrastructure. The malware targeted Android and was relatively rudimentary. Code snippets in attacker supplied prompts indicated it had standard surveillanceware capabilities”

Finally, OpenAI reported that the China-linked group SweetSpecter used ChatGPT for reconnaissance, vulnerability research, malware development, and social engineering. The group also attempted to send malware-laden emails to OpenAI employees, but the spear-phishing campaign was detected and neutralized.

The report also includes information on other covert influence operations. The details shared by OpenAI are useful for profiling the threat actors behind these campaigns and gathering intelligence on their tactics, techniques, and procedures (TTPs).

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, OpenAI)
