RLIE: Rule Generation with Logistic Regression, Iterative Refinement, and Evaluation for Large Language Models
Keywords: Rule Learning, Neuro-Symbolic, LLM
Abstract: Large Language Models (LLMs) can propose rules in natural language, overcoming the constraints of a predefined predicate space inherent in traditional rule learning. However, existing LLM-based methods often overlook the combined effects of rules, and the potential of coupling LLMs with probabilistic rule learning to ensure robust inference remains underexplored.
To address this gap, we introduce **RLIE**, a unified framework that integrates LLMs with probabilistic modeling to learn a set of probabilistic rules.
The RLIE framework comprises four stages: (1) **R**ule generation, where an LLM proposes and filters candidate rules; (2) **L**ogistic regression, which learns probabilistic weights for the rules, providing global selection and calibration; (3) **I**terative refinement, which continuously updates the rule set based on prediction errors; and (4) **E**valuation, which compares the weighted rule set used as a direct classifier against various strategies for injecting the rules into an LLM.
The generated rules are then evaluated with different inference strategies on multiple real-world datasets. Applying the rules directly with their learned weights yields superior performance, whereas prompting an LLM with the rules, their weights, and the logistic model's predictions surprisingly degrades performance.
This result aligns with the observation that LLMs excel at semantic generation and interpretation but are less reliable at fine-grained, controlled probabilistic integration.
Our work investigates the potential and limitations of using LLMs for inductive reasoning tasks, proposing a unified framework that integrates LLMs with classic probabilistic rule combination methods and paving the way for more reliable neuro-symbolic reasoning systems.
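To illustrate stages (1)-(3), the weighted-rule classifier at the core of the framework can be sketched as follows. This is a minimal sketch, not the paper's implementation: the hand-written lambda rules stand in for LLM-proposed rules, and the toy `age`/`income` features, learning rate, and epoch count are all illustrative assumptions.

```python
import math

# Hypothetical candidate rules (stage 1 would have an LLM propose and
# filter these); each rule maps an example dict to True/False.
rules = [
    lambda x: x["age"] > 40,
    lambda x: x["income"] > 50_000,
    lambda x: x["age"] > 40 and x["income"] > 50_000,  # a combined rule
]

def firings(x):
    """Binary feature vector: which rules fire on example x."""
    return [1.0 if r(x) else 0.0 for r in rules]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(examples, labels, lr=0.5, epochs=2000):
    """Stage 2: learn per-rule weights by logistic regression
    (plain batch gradient descent, so the sketch stays dependency-free)."""
    w = [0.0] * len(rules)
    b = 0.0
    n = len(examples)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in zip(examples, labels):
            f = firings(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            for j, fj in enumerate(f):
                gw[j] += err * fj
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(x, w, b):
    """Apply the weighted rule set directly as a classifier."""
    return sigmoid(sum(wi * fi for wi, fi in zip(w, firings(x))) + b) >= 0.5
```

In stage (3), rules whose weights stay near zero would be pruned and examples misclassified by `predict` would be fed back to the LLM to propose replacement rules.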
Primary Area: interpretability and explainable AI
Submission Number: 25346