CONSINTBENCH: EVALUATING LANGUAGE MODELS ON REAL-WORLD CONSUMER INTENT UNDERSTANDING

ICLR 2026 Conference Submission 9273 Authors

17 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM evaluation; Human Intent
Abstract: Understanding human intent is a complex, high-level task for large language models (LLMs), requiring analytical reasoning, contextual interpretation, dynamic information aggregation, and decision-making under uncertainty. Real-world public discussions, such as consumer product discussions, are rarely linear and rarely involve only a single user. Instead, they are characterized by interwoven and often conflicting perspectives, divergent concerns, goals, and emotional tendencies, as well as implicit assumptions and background knowledge about usage scenarios. To accurately understand such public intent, an LLM must go beyond parsing individual sentences: it must integrate multi-source signals, reason over inconsistencies, and adapt to evolving discourse, much as experts in fields like politics, economics, or finance approach complex, uncertain environments. Despite the importance of this capability, no large-scale benchmark currently exists for evaluating LLMs on real-world human intent understanding, primarily due to the challenges of collecting real-world public discussion data and constructing a robust evaluation pipeline. To bridge this gap, we introduce ConsIntBench, the first dynamic, live evaluation benchmark specifically designed for intent understanding, particularly in the consumer domain. ConsIntBench is the largest and most diverse benchmark of its kind, supporting real-time updates while preventing data contamination through an automated curation pipeline. We evaluate 20 LLMs, spanning both open-source and closed-source models, across four core dimensions of consumer intent understanding: depth, breadth, informativeness, and correctness. Our benchmark provides a comprehensive and evolving standard for assessing LLM performance in understanding complex, real-world human intent, with the ultimate goal of advancing LLMs toward expert-level reasoning and analytical capabilities.
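The abstract names four scoring dimensions but the page does not describe how per-response scores are recorded or combined. Below is a minimal Python sketch of one plausible evaluation record, assuming a 0-1 scale per dimension and an unweighted mean as the aggregate; all identifiers, the scale, and the aggregation rule are illustrative assumptions, not the paper's actual pipeline.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class IntentEvalRecord:
    """Hypothetical per-response score record (names and scale are assumed)."""
    model: str
    discussion_id: str
    depth: float            # how deeply implicit goals/assumptions are surfaced
    breadth: float          # coverage of the divergent perspectives in a thread
    informativeness: float  # usefulness of the model's intent summary
    correctness: float      # factual agreement with the discussion content

    def aggregate(self) -> float:
        # Unweighted mean across the four dimensions (an assumed aggregation).
        return mean([self.depth, self.breadth,
                     self.informativeness, self.correctness])

# Example usage with made-up scores.
record = IntentEvalRecord("example-llm", "thread-001", 0.62, 0.71, 0.58, 0.80)
print(f"{record.model}: {record.aggregate():.2f}")
```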
Primary Area: datasets and benchmarks
Submission Number: 9273