DIG AI: Uncensored Darknet AI Assistant at the Service of Criminals and Terrorists

Resecurity reports a Q4 2025 surge in criminal use of DIG AI on Tor, enabling scalable illicit activity and posing new risks ahead of major 2026 events.

During Q4 2025, Resecurity observed a notable increase in malicious actors utilizing DIG AI, with activity accelerating over the winter holidays, when observed illicit use reached record levels. With major events scheduled for 2026, including the Winter Olympics in Milan and the FIFA World Cup, criminal AI may pose new threats and security challenges, enabling bad actors to scale their operations and bypass content protection policies.

DIG AI enables malicious actors to leverage the power of AI to generate guidance ranging from explosive device manufacturing to illegal content creation, including child sexual abuse material (CSAM). Because DIG AI is hosted on the Tor network, such tools are not easily discoverable or accessible to law enforcement. They fuel a significant underground market, ranging from piracy and derivative tools to other illicit services.

Nevertheless, there are significant initiatives, such as AI for Good, established in 2017 by the International Telecommunication Union (ITU) and the United Nations (UN) agency for digital technologies, which promote the responsible use of new technologies.

DIG AI

However, bad actors will focus on the complete opposite – weaponizing and misusing AI.

Resecurity has confirmed that DIG AI can facilitate the production of CSAM. The tool could enable the creation of hyper-realistic, explicit images or videos of children, either by generating entirely synthetic content or by manipulating benign images of real minors. This issue will present a new challenge for legislators in combating the production and distribution of CSAM. Investigators engaged with relevant law enforcement authorities (LEA) to collect and preserve evidence of bad actors producing highly realistic CSAM using DIG AI, content sometimes labeled as "synthetic" but nonetheless illegal under applicable law.

In 2024, a US child psychiatrist was convicted of producing and distributing AI-generated CSAM by digitally altering images of real minors. The images were so realistic that they met the US federal threshold for CSAM. Law enforcement and child safety organizations report a sharp increase in AI-generated CSAM, with offenders including both adults and minors (e.g., classmates creating deepfake nudes for bullying or extortion). The EU, UK, and Australia have enacted laws specifically criminalizing AI-generated CSAM, regardless of whether real children are depicted. In 2025, Resecurity collected numerous indicators that AI is already actively being used by criminals, and it expects new types of high-technology crimes leveraging AI to emerge in 2026.

Resecurity believes that the Internet community will face ominous security challenges enabled by AI in 2026, when, in addition to human actors, criminal and weaponized AI will transform traditional threats and create new risks targeting society at a pace never seen before. Cybersecurity and law enforcement professionals should be alert to the emergence of such dangerous precursors and prepared to continue the fight against the "machine" in the fifth domain of warfare: cyber.

Pierluigi Paganini

(SecurityAffairs – hacking, DIG AI)