Keywords: Program Synthesis, Automated Assessment, Weak Supervision, Reward Model, LLM-as-judge
TL;DR: We propose a low-cost, bias-reduced evaluation system that assesses LLMs through aggregated synthesized judging programs.
Track: Short Paper (up to 4 pages)
Abstract: Large language models (LLMs) are widely used to evaluate the quality of LLM generations and responses, but this leads to significant challenges: high API costs, uncertain reliability, inflexible pipelines, and inherent biases. To address these, we introduce **PAJAMA** (Program-As-a-Judge Automated Model Assessment), a new alternative that uses LLMs to *synthesize executable judging programs* instead of directly scoring responses. These synthesized programs can be stored and run locally, costing orders of magnitude less while providing interpretable and auditable judging logic that can be easily adapted. Program-based judges mitigate biases, improving judgment consistency by **15.83%** and reducing biased responses by **23.7%** on average compared to a Qwen2.5-14B-based LLM-as-a-judge. When program judgments are distilled into a model, PAJAMA outperforms LLM-as-a-judge on the challenging CHAT-HARD subset of RewardBench, exceeding it by **2.19%** on the Prometheus dataset and by **8.67%** on the JudgeLM dataset, all at three orders of magnitude lower cost.
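For intuition, the sketch below is a minimal, hypothetical illustration (not the paper's released code) of the program-as-a-judge idea described in the abstract: a few small judging programs, standing in for LLM-synthesized ones, score a response locally and their verdicts are aggregated by majority vote. All function names and criteria are invented for this example.

```python
# Hypothetical sketch of a program-as-judge pipeline: each "judge" is a small
# executable program (plain Python functions here, standing in for
# LLM-synthesized code) that scores a response locally; their verdicts are
# combined by majority vote. Names and criteria are illustrative only.
from statistics import mode


def judge_length(prompt: str, response: str) -> int:
    """Prefer responses that are neither trivially short nor heavily padded."""
    n = len(response.split())
    return 1 if 20 <= n <= 400 else 0


def judge_addresses_prompt(prompt: str, response: str) -> int:
    """Crude relevance check: the response reuses content words of the prompt."""
    prompt_words = {w.lower() for w in prompt.split() if len(w) > 4}
    response_words = {w.lower() for w in response.split()}
    return 1 if prompt_words & response_words else 0


def judge_no_refusal(prompt: str, response: str) -> int:
    """Penalize boilerplate refusals."""
    return 0 if "i cannot help" in response.lower() else 1


JUDGES = [judge_length, judge_addresses_prompt, judge_no_refusal]


def aggregate_verdict(prompt: str, response: str) -> int:
    """Run every judging program locally and majority-vote their scores."""
    votes = [judge(prompt, response) for judge in JUDGES]
    return mode(votes)


if __name__ == "__main__":
    prompt = "Explain why the sky appears blue during the day."
    response = ("Sunlight is scattered by air molecules; shorter blue "
                "wavelengths scatter more strongly, so the sky appears blue.")
    print(aggregate_verdict(prompt, response))  # -> 1 (response accepted)
```

Because the judges are plain programs, they can be stored, re-run, and audited at negligible cost, which is the property the abstract attributes to PAJAMA; the actual synthesis, aggregation, and distillation procedures are detailed in the paper.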
Format: We have read the camera-ready instructions, and our paper is formatted with the provided template.
Supplementary Material: zip
De-Anonymization: This submission has been de-anonymized.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 24