Abstract: Large Language Models (LLMs) lack the ability to perform commonsense reasoning and to learn from text. In this work, we present a system, called LLM2LAS, for learning commonsense knowledge from story-based question answering expressed in natural language. LLM2LAS combines the semantic parsing capabilities of LLMs with the inductive learning system ILASP to learn commonsense knowledge expressed as answer set programs. LLM2LAS requires only a few examples of questions and answers to learn general commonsense knowledge and correctly answer unseen questions. An empirical evaluation demonstrates the viability of our approach.