Dynamic Content Moderation in Livestreams: Combining Supervised Classification with MLLM-Boosted Similarity Matching
Abstract: Content moderation remains a critical yet challenging task for large-scale user-generated video platforms, especially in livestreaming
environments where moderation must be timely, multimodal, and robust to evolving forms of unwanted content. We present a hybrid
moderation framework deployed at production scale that combines supervised classification for known violations with reference-based
similarity matching for novel or subtle cases that evade traditional classifiers. Multimodal inputs (text, audio, and visual) are processed through both pipelines, with a multimodal
large language model (MLLM) distilling knowledge into each to boost accuracy while keeping inference lightweight. In production, the classification pipeline achieves 67% recall at 80% precision, and the similarity pipeline achieves 76% recall at 80% precision.
Large-scale A/B tests show a 6–8% reduction in user views of unwanted livestreams. These results demonstrate a scalable and
adaptable approach to multimodal content governance, capable of addressing both explicit violations and emerging adversarial
behaviors.
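To make the hybrid design concrete, the sketch below shows one way the two pipelines' outputs could be combined at decision time: a supervised classifier score for known violation types and a maximum cosine similarity against a bank of known-bad reference embeddings, each gated by its own threshold tuned for a target precision. All function names, weights, and threshold values here are illustrative assumptions, not the paper's production implementation.

```python
# Minimal sketch of a hybrid moderation decision combining a supervised
# classifier with reference-based similarity matching. Names, thresholds,
# and the placeholder scoring logic are assumptions for illustration only.
from dataclasses import dataclass

import numpy as np


@dataclass
class ModerationDecision:
    flagged: bool
    source: str      # "classifier", "similarity", or "none"
    score: float


def classifier_score(features: np.ndarray) -> float:
    """Stand-in for a lightweight supervised classifier over fused
    text/audio/visual features (e.g., distilled from an MLLM teacher)."""
    w = np.ones_like(features) / len(features)  # placeholder weights
    return float(1.0 / (1.0 + np.exp(-features @ w)))


def max_reference_similarity(embedding: np.ndarray,
                             reference_bank: np.ndarray) -> float:
    """Cosine similarity against embeddings of previously confirmed
    violations; the best match drives the similarity decision."""
    emb = embedding / np.linalg.norm(embedding)
    refs = reference_bank / np.linalg.norm(reference_bank, axis=1, keepdims=True)
    return float(np.max(refs @ emb))


def moderate(features: np.ndarray,
             embedding: np.ndarray,
             reference_bank: np.ndarray,
             cls_threshold: float = 0.8,
             sim_threshold: float = 0.9) -> ModerationDecision:
    """Flag a livestream segment if either pipeline fires; each threshold
    would be tuned separately to hit a target precision (e.g., 80%)."""
    c = classifier_score(features)
    if c >= cls_threshold:
        return ModerationDecision(True, "classifier", c)
    s = max_reference_similarity(embedding, reference_bank)
    if s >= sim_threshold:
        return ModerationDecision(True, "similarity", s)
    return ModerationDecision(False, "none", max(c, s))
```

In this framing, the classifier handles explicit, well-represented violation types, while the reference bank can be updated with newly confirmed cases to catch emerging behaviors without retraining the classifier.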