Recent advancements in large language models (LLMs) have substantially improved performance on the auto-formalization and auto-informalization tasks. However, existing approaches suffer from three key limitations: (1) isolated treatment of these dual tasks despite their inherent complementarity, (2) decoupled optimization of the training and inference phases, and (3) under-explored collaboration potential among different LLMs. To address these challenges, we propose JAFI, a unified framework that integrates training and inference while jointly modeling auto-formalization and auto-informalization through modular collaboration among specialized components. We evaluate JAFI on the AMR and miniF2F datasets, which use Lean 3 and Lean 4, respectively. The results demonstrate that JAFI significantly surpasses existing methods on both tasks, and comprehensive ablation studies further corroborate the effectiveness of its carefully designed modules. Additionally, JAFI's superiority is validated by its performance in the ICML 2024 Challenges on Automated Math Reasoning. Code and datasets are available at https://anonymous.4open.science/r/JAFI-EDBC.
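To make the dual tasks concrete, consider a toy example of our own (it is illustrative only and not drawn from the paper's datasets, which use Mathlib-style Lean): auto-formalization maps an informal statement such as "the sum of two even natural numbers is even" to a formal theorem, and auto-informalization inverts the mapping, recovering a natural-language statement from the formal one.

```lean
import Mathlib

-- Illustrative auto-formalization target (assumes Mathlib's `Even`):
-- informal input:  "The sum of two even natural numbers is even."
-- formal output:
theorem even_add_even (m n : ℕ) (hm : Even m) (hn : Even n) :
    Even (m + n) := by
  obtain ⟨a, ha⟩ := hm   -- m = a + a
  obtain ⟨b, hb⟩ := hn   -- n = b + b
  exact ⟨a + b, by omega⟩ -- m + n = (a + b) + (a + b)
```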