MinorBench: A hand-built benchmark for content-based risks for children

Published: 06 Mar 2025, Last Modified: 10 Apr 2025 · ICLR 2025 Workshop AI4CHL Poster · CC BY 4.0
Track: Full paper
Keywords: LLM, risk, children, safety
TL;DR: Based on a real-world case study, we develop a taxonomy and benchmark for testing minor-specific risks for LLMs.
Abstract: Large Language Models (LLMs) are rapidly entering children’s lives — through parent-driven adoption, schools, and peer networks — yet current AI ethics and safety research does not adequately address content-related risks specific to minors. In this paper, we highlight these gaps with a real-world case study of an LLM-based chatbot deployed in a middle school setting, revealing how students used and sometimes misused the system. Building on these findings, we propose a new taxonomy of content-based risks for minors and introduce MinorBench, an open-source benchmark designed to evaluate LLMs on their ability to refuse unsafe or inappropriate queries from children. We evaluate six prominent LLMs under different system prompts, demonstrating substantial variability in their child-safety compliance. Our results inform practical steps for more robust, child-focused safety mechanisms and underscore the urgency of tailoring AI systems to safeguard young users.
Submission Number: 35