Abstract: Large Language Models (LLMs) perform worse on reasoning tasks in non-English languages than in English, largely due to the limited pretraining data available for those languages. In this paper, we propose Knowledge Funnel, a novel multilingual reasoning framework that improves LLM performance through four steps: (1) Multilingual Knowledge Alignment, which enhances reasoning by leveraging English knowledge; (2) Entity-Structured Knowledge, which extracts a structured representation of the question; (3) Dependency Knowledge, which captures language-specific dependencies such as units and quantifiers; and (4) Calculation and Answer Generation, which ensures accurate reasoning results. Furthermore, the framework can be combined with other approaches, such as Chain-of-Thought (CoT) prompting, to achieve even better results. Our framework achieves 11.3% and 11.1% improvements over CoT methods on MGSM8K and MSVAMP, respectively, demonstrating its effectiveness in enhancing LLMs' multilingual reasoning capabilities. We will release our code upon acceptance.
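To make the four-step pipeline concrete, here is a minimal sketch in Python, assuming each stage is realized as a sequential prompt to a single LLM; the prompt wordings, the `llm` callable, and the function name `knowledge_funnel` are all illustrative assumptions, not the authors' released implementation.

```python
from typing import Callable

def knowledge_funnel(question: str, llm: Callable[[str], str]) -> str:
    """Sketch of the four-stage Knowledge Funnel pipeline described in the abstract.

    All prompt templates below are hypothetical stand-ins for the paper's stages.
    """
    # (1) Multilingual Knowledge Alignment: ground the question in English knowledge.
    aligned = llm(
        "Restate the following problem in English, preserving all facts:\n" + question
    )
    # (2) Entity-Structured Knowledge: extract a structured representation of the question.
    entities = llm(
        "List the entities, quantities, and relations in this problem:\n" + aligned
    )
    # (3) Dependency Knowledge: capture language-specific units and quantifiers
    #     from the original (non-English) wording.
    deps = llm(
        "Identify the units and quantifiers used in the original wording:\n" + question
    )
    # (4) Calculation and Answer Generation: reason over the collected knowledge.
    return llm(
        "Solve step by step using the information below, then state the final answer.\n"
        f"Problem: {aligned}\nEntities: {entities}\nDependencies: {deps}"
    )

if __name__ == "__main__":
    # Dummy LLM that echoes its prompt, just to show the call pattern end to end.
    echo = lambda prompt: f"[model output for: {prompt[:40]}...]"
    print(knowledge_funnel("一个篮子里有3个苹果和5个橘子，一共有多少个水果？", echo))
```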
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: multilingualism
Contribution Types: Model analysis & interpretability
Languages Studied: English, Chinese
Submission Number: 5825