DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
DOI: 10.1145/3650212.3680304