Protect AI Guardian scans ML models to determine if they contain unsafe code

Protect AI announced Guardian, which enables organizations to enforce security policies on ML models to prevent malicious code from entering their environment. Guardian is based on ModelScan, an open-source tool from Protect AI that scans machine learning models to determine if they contain unsafe code. Guardian brings together the best of Protect AI’s open-source offering, adds enterprise-level enforcement and management of model security, and extends coverage with proprietary scanning capabilities. The growing …
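For context, the “unsafe code” such scanners look for typically lives in serialization formats like Python pickle, which can embed instructions that execute the moment a model file is loaded. The sketch below is illustrative only and assumes nothing about Guardian’s or ModelScan’s actual implementation; it shows how a pickled model can carry a code-execution payload and how a naive opcode check might flag it.

```python
"""Illustrative sketch: how pickle-based model files can embed code that
runs on load, and a naive way a scanner might flag it. This is NOT
Guardian's or ModelScan's implementation."""
import pickle
import pickletools


class MaliciousPayload:
    # A pickled object may ask the loader to call an arbitrary function,
    # e.g. os.system, the moment the "model" is deserialized.
    def __reduce__(self):
        import os
        return (os.system, ("echo 'code executed during model load'",))


def naive_scan(data: bytes) -> bool:
    """Flag pickle streams that import a global callable (GLOBAL /
    STACK_GLOBAL) and then invoke it (REDUCE) -- a common signature of
    embedded code execution."""
    seen = set()
    for opcode, _arg, _pos in pickletools.genops(data):
        seen.add(opcode.name)
    return bool({"GLOBAL", "STACK_GLOBAL"} & seen) and "REDUCE" in seen


if __name__ == "__main__":
    unsafe_blob = pickle.dumps(MaliciousPayload())          # never unpickled here
    benign_blob = pickle.dumps({"weights": [0.1, 0.2, 0.3]})
    print("unsafe model flagged:", naive_scan(unsafe_blob))  # True
    print("benign model flagged:", naive_scan(benign_blob))  # False
```

Real scanners cover many more formats and evasion techniques, but the example captures why loading an unscanned model file can be equivalent to running untrusted code.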
