Online platforms are running large language models at every stage of content moderation, from generating training data to auditing their own systems for bias. Researchers at Google mapped how this is happening across what the authors call the Abuse Detection Lifecycle, a four-stage framework covering labeling, detection, review and appeals, and auditing. Earlier moderation systems, built on models like BERT and RoBERTa fine-tuned on static hate-speech datasets, could identify explicit slurs with reasonable accuracy.
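For context, a minimal sketch of that earlier-generation approach is shown below: fine-tuning a BERT-style encoder as a binary hate-speech classifier on a static labeled corpus. The model choice, the toy training examples, and the hyperparameters here are illustrative assumptions, not details taken from the Google study.

```python
# Sketch only: fine-tune a BERT-style encoder as a binary hate-speech classifier.
# Dataset contents, labels, and hyperparameters are placeholders.
import torch
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "bert-base-uncased"  # "roberta-base" would be used the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Stand-in for a static hate-speech dataset; label 1 = abusive, 0 = benign.
train_data = Dataset.from_dict({
    "text": ["example of an abusive post", "example of a benign post"],
    "label": [1, 0],
})

def tokenize(batch):
    # Convert raw text into token IDs the encoder expects.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="hate-speech-bert",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=train_data,
)
trainer.train()

# At inference time, the fine-tuned classifier scores each incoming post.
inputs = tokenizer("some new post to moderate", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probabilities over [benign, abusive]
```

A classifier like this works well on explicit slurs seen in its training data, which is the baseline the study contrasts with LLM-based approaches across the lifecycle stages.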
