Contrast Security extends its application security testing (AST) platform to support testing of Large Language Models (LLMs) from OpenAI. In this first release, Contrast rules help teams that are developing software using the OpenAI application programming interface (API) set to identify and mitigate weaknesses that could expose an organization to prompt injection vulnerabilities: i.e., attacks involving injection of a prompt that deceives the application into executing unauthorized code. Prompt injection was identified as the top … More
The post Contrast Security helps organizations identify susceptible data flows to their LLMs appeared first on Help Net Security.