Actively Learning Horn Envelopes from LLMs

ICLR 2026 Conference Submission 19140 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Language Models, Knowledge Extraction, Active Learning, Gender-Occupation Relationships
Abstract: We propose a new strategy for extracting Horn rules from Large Language Models (LLMs). Our approach is based on Angluin's classical exact learning framework where a learner actively learns a target formula in Horn logic by posing membership and equivalence queries. While membership queries naturally fit into the question-answering setup of LLMs, equivalence queries are more challenging. Previous works have simulated equivalence queries in ways that often lead to a large number of uninformative queries, making such simulations prohibitively expensive for modern LLMs. Here, we propose a new adaptive prompting strategy for posing equivalence queries, making use of the generative capabilities of modern LLMs to directly elicit counterexamples. We consider a case study where we learn rules describing gender-occupation relationships from the LLM. We evaluate our approach on the number of queries asked, the alignment of the extracted rules with historical data, and the consistency across runs. We show that our approach is able to extract Horn rules aligned with historical data with high confidence while requiring orders of magnitude fewer queries than previous methods.
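The exact learning setup the abstract describes can be illustrated with a small sketch in the style of the Angluin-Frazier-Pitt Horn learning algorithm. This is not the paper's implementation: the target formula, variable names, and oracles below are illustrative assumptions. In particular, the equivalence query here is simulated by brute-force enumeration of assignments, which is precisely the expensive step the paper proposes to replace with adaptive counterexample-eliciting prompts to an LLM; the membership query stands in for a yes/no question posed to the model.

```python
from itertools import combinations

VARS = ("a", "b", "c", "d")  # illustrative propositional variables

def satisfies(x, clauses):
    """True iff assignment x (the set of true variables) satisfies every
    Horn clause. A clause is (antecedent, consequent); consequent None
    encodes 'falsum' (the antecedent must never hold)."""
    for ant, cons in clauses:
        if ant <= x and (cons is None or cons not in x):
            return False
    return True

# Hypothetical target: the Horn formula we pretend the LLM encodes.
TARGET = [(frozenset({"a", "b"}), "c"),   # a & b -> c
          (frozenset({"c", "d"}), None)]  # c & d -> falsum

def mq(x):
    """Membership query: would the oracle (here: TARGET) accept x?
    In the paper's setting this is a yes/no question to the LLM."""
    return satisfies(x, TARGET)

def eq(hyp):
    """Equivalence query, simulated by brute force over all assignments.
    This is the step the paper replaces with adaptive LLM prompting
    that elicits counterexamples directly."""
    for r in range(len(VARS) + 1):
        for combo in combinations(VARS, r):
            x = frozenset(combo)
            if satisfies(x, hyp) != mq(x):
                return x
    return None  # hypothesis is equivalent to the target

def clauses_for(s):
    """All candidate clauses with antecedent s."""
    return [(s, b) for b in VARS if b not in s] + [(s, None)]

def learn_horn():
    """Exact learning of a Horn formula from MQ/EQ oracles,
    in the style of Angluin, Frazier, and Pitt (1992)."""
    S, H = [], []  # negative examples and hypothesis clauses
    while (x := eq(H)) is not None:
        if not satisfies(x, H):
            # positive counterexample: drop every clause x violates
            H = [(a, b) for a, b in H
                 if not (a <= x and (b is None or b not in x))]
        else:
            # negative counterexample: shrink the first refinable
            # example via intersection, else append x as a new one
            for i, s in enumerate(S):
                inter = s & x
                if inter < s and not mq(inter):
                    H = [c for c in H if c[0] != s] + clauses_for(inter)
                    S[i] = inter
                    break
            else:
                S.append(x)
                H += clauses_for(x)
    return H
```

On this toy target, `learn_horn()` returns a hypothesis that agrees with `TARGET` on all 16 assignments; the cost is dominated by the simulated `eq`, which is why replacing it with direct counterexample generation matters at LLM scale.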
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 19140