Beyond Blind Spots: Analytic Hints for Mitigating LLM-Based Evaluation Pitfalls

Published: 06 Nov 2025, Last Modified: 06 Nov 2025, AIR-FM Poster, CC BY 4.0
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Keywords: llm-as-a-judge, hints, evaluation, cobol
TL;DR: Analytic hint injection boosts LaaJ reliability in COBOL code evaluation, raising error detection from ~45% to 94% while preserving general reasoning
Abstract: Large Language Models are increasingly deployed as judges (LaaJ) in code generation pipelines. While attractive for scalability, LaaJs tend to overlook domain-specific issues, raising concerns about their reliability in critical evaluation tasks. To understand these limitations in practice, we examine LaaJ behavior in a concrete industrial use case: legacy code modernization via COBOL code generation. In this setting, we find that even production-deployed LaaJs can miss domain-critical errors, revealing consistent blind spots in their evaluation capabilities. To characterize these blind spots, we analyze generated COBOL programs and the associated LaaJ judgments, drawing on expert knowledge to construct a preliminary taxonomy. Based on this taxonomy, we develop a lightweight analytic checker tool that flags over 30 domain-specific issues observed in practice. We use its outputs as "analytic hints," dynamically injecting them into the judge's prompt to encourage the LaaJ to revisit aspects it may have overlooked. Experiments on a test set of 100 programs using four production-level LaaJs show that a LaaJ alone detects only about 45% of the errors present in the code (across all judges we tested), while the analytic checker alone lacks explanatory depth. When combined, the LaaJ+Hints configuration achieves up to 94% coverage (for the best-performing judge and injection prompt) and produces qualitatively richer, more accurate explanations, demonstrating that analytic-LLM hybrids can substantially enhance evaluation reliability in deployed pipelines.
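The hint-injection mechanism the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the two checker rules stand in for the ~30 domain-specific checks, and the function names and prompt format are assumptions.

```python
# Hypothetical sketch of analytic-hint injection into a judge (LaaJ) prompt.
# The checker rules below are illustrative stand-ins, not the paper's rules.

def analytic_checker(cobol_source: str) -> list[str]:
    """Flag illustrative COBOL issues (stand-ins for the real rule set)."""
    hints = []
    if "GO TO" in cobol_source:
        hints.append("Unstructured GO TO detected; verify control flow.")
    if "MOVE" in cobol_source and "PIC" not in cobol_source:
        hints.append("MOVE used without visible PIC clauses; check data definitions.")
    return hints

def build_judge_prompt(cobol_source: str, hints: list[str]) -> str:
    """Inject checker findings as analytic hints into the LaaJ prompt."""
    hint_block = "\n".join(f"- {h}" for h in hints) or "- (no analytic findings)"
    return (
        "You are evaluating the following COBOL program:\n"
        f"{cobol_source}\n\n"
        "An analytic checker flagged these potential issues; "
        "revisit each before giving your verdict:\n"
        f"{hint_block}\n"
    )

program = "PROCEDURE DIVISION.\n    GO TO DONE.\nDONE.\n    STOP RUN."
prompt = build_judge_prompt(program, analytic_checker(program))
```

The resulting prompt would then be sent to the judge model in place of the plain evaluation prompt, which is how the LaaJ+Hints configuration differs from the LaaJ-alone baseline.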
Submission Track: Workshop Paper Track
Submission Number: 34