Automatically Advancing LLM Expertise in Technology Judgment

ICLR 2026 Conference Submission 22563 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: technology judgment, LLM understanding, benchmark, self-questioning
TL;DR: Using a new dataset of 1.3 million patent pairs, we test self-questioning as both a practical mechanism for automatically enhancing LLM comprehension of technologies and a diagnostic probe into how internal and external knowledge are organized.
Abstract: Large language models (LLMs) are rapidly becoming core tools for science, engineering, and innovation. Their promise lies not just in remembering facts, but in putting knowledge to work. Despite their impressive ability to answer increasingly difficult questions, it remains unclear whether LLMs truly use their knowledge when confronted with new and challenging tasks. We address this question with a patent classification task that requires deep conceptual understanding: distinguishing objectively different but semantically similar patents. To support this task, we introduce a challenging new benchmark of 1.3 million post-2015 computer science patent pairs, characterized by dense technical jargon and strategically complex writing. We find that LLMs often fail our benchmark and struggle to distinguish among semantically similar patents. To probe this failure, we introduce a novel framework that decomposes model errors into two sources: missing and unused knowledge. Our approach asks models to generate clarifying questions to improve their understanding, and then compares three settings: raw performance, self-answered questions, and externally supplied answers. This decomposition reveals that LLMs often possess the relevant knowledge internally but fail to deploy it, while a smaller share of errors arises from genuine knowledge gaps. We then ask whether models differ in their ability to construct a task-specific database of questions and answers. We find that smaller models generate simpler, broadly transferable questions, while larger models propose more complex but less generalizable ones. This suggests new strategies for combining strengths across models. Taken together, our findings highlight a critical limitation of current LLMs and their evaluation: models often know more than they can use. By shifting evaluation from recall of static facts to application of dynamic knowledge, our approach provides a more informative lens on model capabilities and opens a path toward building systems that better support technological discovery and innovation.
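
To make the decomposition concrete, the following is a minimal Python sketch, not the authors' released code, of how outcomes from the three settings (raw, self-answered, externally answered) could be tallied into unused versus missing knowledge. The `PairResult` record, the `decompose_errors` helper, and the toy inputs are hypothetical illustrations under the assumption that each patent pair is scored once per setting.

```python
from dataclasses import dataclass


@dataclass
class PairResult:
    raw_correct: bool        # setting 1: model classifies the pair directly
    self_correct: bool       # setting 2: model first answers its own clarifying questions
    external_correct: bool   # setting 3: clarifying questions answered by an external source


def decompose_errors(results):
    """Split raw-setting errors into 'unused' vs 'missing' knowledge.

    An error that disappears once the model answers its own questions is
    counted as knowledge the model holds but fails to deploy; an error fixed
    only by externally supplied answers is counted as a genuine knowledge gap.
    """
    raw_errors = [r for r in results if not r.raw_correct]
    unused = sum(r.self_correct for r in raw_errors)
    missing = sum((not r.self_correct) and r.external_correct for r in raw_errors)
    return {
        "unused_knowledge": unused,
        "missing_knowledge": missing,
        "unresolved": len(raw_errors) - unused - missing,
    }


# Toy usage with three hypothetical patent pairs (not benchmark data).
toy = [
    PairResult(raw_correct=False, self_correct=True, external_correct=True),
    PairResult(raw_correct=False, self_correct=False, external_correct=True),
    PairResult(raw_correct=True, self_correct=True, external_correct=True),
]
print(decompose_errors(toy))
# -> {'unused_knowledge': 1, 'missing_knowledge': 1, 'unresolved': 0}
```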
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 22563