Microsoft took legal action against crooks who developed a tool to abuse its AI-based services

In December, Microsoft sued a group for creating tools to bypass safety measures in its cloud AI products.

Microsoft filed a complaint in the U.S. District Court for the Eastern District of Virginia against ten individuals accused of using stolen credentials and custom software to breach computers running Microsoft’s Azure OpenAI Service and generate content for harmful purposes.

“Defendants used stolen customer credentials and custom-designed software to break into the computers running Microsoft’s Azure OpenAI Service. Defendants then used Microsoft’s computers and software for harmful purposes,” reads the complaint. “Microsoft respectfully seeks the Court’s assistance in putting a stop to Defendants’ illegal conduct and holding Defendants to account for what they have done.”

In July 2024, Microsoft discovered that stolen API keys belonging to paying Azure OpenAI customers were being used to generate content that violated the service’s policies. The IT giant did not share details about the content that was generated. The company states that the illegal activity directly violates U.S. law, as well as the Acceptable Use Policy and Code of Conduct for its services.

“Defendants collectively operate and/or control infrastructure, software, and technical artifacts used to carry out the violations of law described in this Complaint,” continues the complaint. “To summarize briefly: Defendants illegally procured authentication information from legitimate Microsoft customers with malicious intent, trafficked and used that stolen customer authentication information to bypass Microsoft authentication gates and gain unauthorized access to Microsoft software and computer systems, and then exploited their unauthorized access to Microsoft’s software and computers to create harmful content in violation of Microsoft’s policies and through circumvention of Microsoft’s technical protective measures.”

The method used to steal the API keys is unknown. The defendants appear to have run a hacking-as-a-service operation, stealing API keys from Microsoft customers and selling access so that others could exploit generative AI tools for malicious purposes.

“Microsoft’s Digital Crimes Unit (DCU) is taking legal action to ensure the safety and integrity of our AI services. In a complaint unsealed in the Eastern District of Virginia, we are pursuing an action to disrupt cybercriminals who intentionally develop tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft’s, to create offensive and harmful content,” states Microsoft. “Microsoft continues to go to great lengths to enhance the resilience of our products and services against abuse; however, cybercriminals remain persistent and relentlessly innovate their tools and techniques to bypass even the most robust security measures. With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated.”

The US court authorized Microsoft to seize a website “instrumental” to the defendants’ operation and to disrupt any additional technical infrastructure the company identifies.

Microsoft confirmed that it has locked out the crooks and adopted additional countermeasures to prevent future abuses of its services.

“Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future,” concludes Microsoft.


Pierluigi Paganini

(SecurityAffairs – hacking, AI)
