Keywords: Benchmark, Chinese-specific, Large Language Models, Safety, Toxicity, Ethics
Abstract: Large language models (LLMs) are increasingly deployed in cost-sensitive and on-device scenarios, yet safety guardrails have advanced mainly in English. Real-world Chinese malicious queries typically conceal intent via `homophones`, `pinyin`, `symbol-based splitting`, and other Chinese-specific patterns, creating a safety evaluation gap that existing English-focused benchmarks do not capture well. This gap is particularly concerning for lightweight models, which may be more vulnerable to such adversarial perturbations. To bridge it, we introduce the **C**hinese-**S**pecific **S**afety **Bench**mark (**CSSBench**), which emphasizes these adversarial patterns and evaluates the safety of lightweight LLMs in Chinese. Our benchmark covers six domains common in real-world Chinese scenarios, including **illegal activities and compliance**, **privacy leakage**, **health and medical misinformation**, **fraud and hate**, **adult content**, and **public and political safety**, and organizes queries into multiple task types. We evaluate a set of popular lightweight LLMs and measure over-refusal behavior to assess safety-induced performance degradation. Our results show that Chinese-specific adversarial patterns pose a critical challenge for lightweight LLMs. This benchmark offers a comprehensive evaluation of LLM safety in Chinese, supporting robust deployment in practice.
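To make the three named perturbation types concrete, here is a minimal, hypothetical sketch of how a sensitive keyword might be obfuscated. The keyword, character mappings, and function names are illustrative assumptions for exposition only; they are not drawn from CSSBench itself.

```python
# Illustrative sketch (not from CSSBench): three Chinese-specific
# adversarial patterns that can conceal a sensitive keyword from
# exact-match or character-level safety filters.
# All strings and mappings below are hypothetical placeholders.

SENSITIVE = "枪支"  # "firearms", used here only as a placeholder keyword


def homophone_substitution(text: str) -> str:
    # Swap characters for (near-)homophones: 枪/呛 "qiang", 支/枝 "zhi".
    table = {"枪": "呛", "支": "枝"}
    return "".join(table.get(ch, ch) for ch in text)


def pinyin_rewrite(text: str) -> str:
    # Spell the keyword in pinyin so character-based matching misses it.
    pinyin = {"枪": "qiang", "支": "zhi"}
    return " ".join(pinyin.get(ch, ch) for ch in text)


def symbol_split(text: str) -> str:
    # Insert symbols between characters to break exact string matching.
    return "*".join(text)


if __name__ == "__main__":
    for perturb in (homophone_substitution, pinyin_rewrite, symbol_split):
        print(perturb.__name__, "->", perturb(SENSITIVE))
```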
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: model bias/fairness evaluation, NLP datasets, benchmarking, evaluation, safety and alignment in LLMs
Contribution Types: Data resources
Languages Studied: Chinese
Submission Number: 820