Microsoft has exposed four individuals behind an Azure abuse scheme that used unauthorized access to generative AI services to create harmful content.
Microsoft shared the names of four developers of malicious tools designed to bypass the guardrails of generative AI services, including Microsoft’s Azure OpenAI Service.
Microsoft is taking legal action against the defendants to dismantle their operation and curb the misuse of its AI technology.
The four individuals are Arian Yadegarnia, aka “Fiz,” of Iran; Alan Krysiak, aka “Drago,” of the United Kingdom; Ricky Yuen, aka “cg-dot,” of Hong Kong, China; and Phát Phùng Tấn, aka “Asakuri,” of Vietnam. All four are members of a global cybercrime ring that Microsoft tracks as Storm-2139.
The defendants used exposed customer credentials scraped from public sources to unlawfully access accounts with certain generative AI services. They then modified the capabilities of those services, resold access to other malicious actors, and provided detailed guides for generating illicit content, including non-consensual intimate images of celebrities.
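Because the initial access reportedly came from API keys leaked in public sources, the practical defense on the customer side is to catch such keys before they are published. Below is a minimal, illustrative sketch of a source-tree scanner for key-shaped strings; the regexes (including the assumption that service keys often look like 32-character hex strings) are hypothetical examples for illustration, not Microsoft's documented key formats, and a real deployment would use a dedicated secret-scanning tool.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only: these are assumptions about common key shapes,
# not an official specification of any vendor's credential format.
PATTERNS = {
    "32-char hex key": re.compile(r"\b[0-9a-f]{32}\b"),
    "api-key assignment": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_file(path: Path) -> None:
    """Print a warning for every line that matches a key-like pattern."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return  # unreadable file (permissions, special file), skip it
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {label}")

if __name__ == "__main__":
    # Scan the directory given as the first argument, or the current one.
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for path in root.rglob("*"):
        if path.is_file():
            scan_file(path)
```

Run against a repository before pushing (e.g. `python scan_keys.py ./my-project`); anything flagged should be rotated rather than merely deleted, since scraped credentials may already have been harvested.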
“This activity is prohibited under the terms of use for our generative AI services and required deliberate efforts to bypass our safeguards,” reads the announcement published by Microsoft. “We are not naming specific celebrities to keep their identities private and have excluded synthetic imagery and prompts from our filings to prevent the further circulation of harmful content.”
The IT giant began its investigation in December 2024, when Microsoft’s Digital Crimes Unit (DCU) filed a lawsuit in the Eastern District of Virginia against ten unidentified individuals, alleging various causes of action under U.S. law as well as violations of Microsoft’s Acceptable Use Policy and Code of Conduct.
The researchers identified three roles within Storm-2139: creators, providers, and users. Creators developed the tools used to abuse generative AI services, and providers modified and supplied those tools to end users.
Microsoft reported that end users then used these tools to generate illicit synthetic content, often sexual imagery centered on celebrities.

Microsoft’s legal action disrupted the group’s operations: the seizure of key infrastructure sparked internal conflict within Storm-2139 and triggered doxing attempts against Microsoft’s counsel. Members speculated about one another’s identities, traded blame, and leaked information, while emails from suspected members tried to shift responsibility onto others. The reaction underscores the lawsuit’s impact and shows how legal action can dismantle cybercrime networks.
“We take the misuse of AI very seriously, recognizing the serious and lasting impacts of abusive imagery for victims,” concludes the announcement. “As we’ve said before, no disruption is complete in one day. Going after malicious actors requires persistence and ongoing vigilance. By unmasking these individuals and shining a light on their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse.”