NVIDIA research shows how agentic AI fails under attack

Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also introduces new kinds of risk, which emerge in the interactions between models, tools, data sources, and memory stores. A research team from NVIDIA and Lakera AI has released a safety and security framework that aims to map these risks and measure them inside real workflows. The work …

The post NVIDIA research shows how agentic AI fails under attack appeared first on Help Net Security.
