From Ambiguity to Verdict: A Semiotic‑Grounded Multi‑Perspective Agent for LLM Logical Reasoning

ICLR 2026 Conference Submission 6656 Authors

16 Sept 2025 (modified: 18 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Semiotic Logic, Logic Reasoning, LLM Agent
TL;DR: We propose LogicAgent, a multi-perspective reasoning framework that improves logical reasoning under ambiguity by jointly evaluating contradictory and contrary views.
Abstract: Logical reasoning is a fundamental capability of large language models (LLMs). However, existing studies largely overlook the interplay between logical complexity and semantic complexity, resulting in methods that struggle with challenging scenarios involving abstract propositions, ambiguous contexts, and conflicting stances, all features central to human reasoning. We propose **LogicAgent**, a semiotic-square-guided framework that jointly addresses these two axes of difficulty. The semiotic square provides a principled structure for multi-perspective semantic analysis, and LogicAgent integrates automated deduction with reflective verification to manage logical complexity across deeper reasoning chains. To support evaluation under these conditions, we introduce **RepublicQA**, a benchmark that couples semantic complexity with logical depth. RepublicQA reaches college-level semantic difficulty (FKGL 11.94), contains philosophically grounded abstract propositions with systematically constructed contrary and contradictory forms, and offers the most semantically rich setting to date for assessing logical reasoning in LLMs. Experiments show that LogicAgent achieves state-of-the-art performance on RepublicQA, with a 6.25% average gain over strong baselines, and generalizes effectively to mainstream logical reasoning benchmarks, including ProntoQA, ProofWriter, FOLIO, and ProverQA, with a further 7.05% average gain. These results highlight the effectiveness of semiotic-grounded multi-perspective reasoning in boosting LLMs' logical performance.
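The semiotic square mentioned in the abstract organizes a proposition into four related stances: an assertion, its contrary (both cannot be true, but both may be false), and the contradictories of each (exactly one of a contradictory pair is true). As a minimal illustrative sketch only (the class and names below are hypothetical, not the authors' implementation), the four perspectives a framework like LogicAgent would evaluate jointly can be represented as:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SemioticSquare:
    """Four positions of a Greimas semiotic square for a proposition s1.

    s1 and s2 are contraries: they cannot both be true, but may both be false.
    s1 and not_s1 (likewise s2 and not_s2) are contradictories: exactly one holds.
    """
    s1: str      # assertion, e.g. "Justice is good"
    s2: str      # contrary assertion, e.g. "Justice is bad"
    not_s1: str  # contradictory of s1
    not_s2: str  # contradictory of s2

    def perspectives(self) -> list[str]:
        """Return all four stances for joint multi-perspective evaluation."""
        return [self.s1, self.s2, self.not_s1, self.not_s2]


square = SemioticSquare(
    s1="Justice is good",
    s2="Justice is bad",
    not_s1="Justice is not good",
    not_s2="Justice is not bad",
)
print(len(square.perspectives()))  # → 4
```

In this reading, jointly scoring all four stances (rather than only the assertion and its negation) is what distinguishes contrary from contradictory views under ambiguity.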
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 6656