Most AI privacy research looks the wrong way

Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors argue that while most technical studies target data memorization, the biggest risks come from how LLMs collect, process, and infer information during regular use.

A narrow view of privacy research

The study reviewed 1,322 AI and machine learning privacy papers published between 2016 and 2025. It found that …
