Protect AI has announced Guardian, which enables organizations to enforce security policies on ML models and prevent malicious code from entering their environments. Guardian is based on ModelScan, Protect AI's open-source tool that scans machine learning models to determine whether they contain unsafe code. Guardian builds on that open-source foundation, adding enterprise-level enforcement and management of model security and extending coverage with proprietary scanning capabilities.
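To illustrate the class of risk such scanners target, consider Python's pickle format, which is widely used to serialize models and can carry payloads that execute the moment a file is loaded. The sketch below is illustrative, not taken from Protect AI's tooling: the MaliciousPayload class and the model.pkl filename are hypothetical, and a benign echo command stands in for an attacker's code.

```python
import os
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle how to reconstruct this object.
    # An attacker can make it return an arbitrary callable plus
    # arguments, which run as soon as the file is unpickled.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at load time'",))

# Serialize the payload as if it were a saved model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# A victim "loading the model" triggers the embedded command --
# no method call is needed; deserialization alone executes it.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

Scanning an artifact like this with a tool such as ModelScan is meant to flag the embedded os.system reference before the file is ever deserialized, which is the kind of check Guardian enforces as policy across an organization.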