Kimina Lean Server: A High-Performance Lean Server for Large-Scale Verification

Published: 17 Oct 2025, Last Modified: 21 Nov 2025
MATH-AI 2025 Poster
License: CC BY 4.0
Keywords: formal mathematics, Lean, formal verification, reinforcement learning
TL;DR: We introduce an open-source server for fast and scalable interaction with Lean 4, specifically designed as a high-performance verifier for reinforcement learning pipelines.
Abstract: Recent progress in neural theorem proving has been driven by the training of large language models on Lean 4 problems via reinforcement learning, a process that requires fast and scalable verification of proofs. We introduce the Kimina Lean Server, an open-source project designed as a high-performance verifier for reinforcement learning pipelines. Built on top of the Lean REPL (Read-Eval-Print Loop) maintained by the Lean Focused Research Organization (FRO), our server combines server-side parallelism, managing multiple Lean processes concurrently, with an LRU (Least Recently Used) caching mechanism that reuses Lean imports across requests. On the client side, a lightweight Python package enables submitting proof batches and receiving Lean feedback, including extracted tactics and tactic states. Together, these features enable a scalable workflow for large-scale verification and data extraction. In our experiments, the Kimina Lean Server outperforms previous Lean interaction tools, achieving a 1.5 to 2 times speedup in verification time. Moreover, its improved efficiency has enabled its use in the large-scale training of state-of-the-art models. We hope that our open-source project will support the neural theorem proving community and accelerate future progress by enabling efficient large-scale verification and proof data extraction.
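For concreteness, the sketch below shows how the batched client-side workflow described in the abstract might look in Python. The class name `Lean4Client`, its constructor arguments, the `verify` method and its signature, and the default port are illustrative assumptions, not details taken from the abstract; the project's repository documents the actual client API.

```python
# Minimal sketch of batch proof verification against the Kimina Lean Server.
# All client names and parameters below are assumed for illustration only.
from typing import Any

# Hypothetical import of the client package described in the abstract:
# from kimina_client import Lean4Client

PROOFS = [
    "import Mathlib\n\ntheorem add_comm_example (a b : Nat) : a + b = b + a := by\n  omega",
    "import Mathlib\n\ntheorem one_add_one : 1 + 1 = 2 := by\n  norm_num",
]


def verify_batch(client: Any, proofs: list[str], timeout_s: int = 60) -> None:
    """Submit a batch of Lean 4 proofs and print the server's feedback."""
    # A single batched request lets the server schedule proofs across its pool
    # of Lean REPL workers and reuse cached imports (e.g. Mathlib) via the LRU
    # mechanism described in the abstract.
    results = client.verify(proofs, timeout=timeout_s)  # assumed method and signature
    for proof, result in zip(proofs, results):
        status = "ok" if result.get("error") is None else "failed"
        print(f"[{status}] {proof.splitlines()[-2]!r}")


if __name__ == "__main__":
    # client = Lean4Client(base_url="http://localhost:12332")  # assumed constructor and port
    # verify_batch(client, PROOFS)
    pass
```

Batching is the key design point here: sending many proofs in one request amortizes connection overhead and gives the server freedom to parallelize across Lean processes and hit the import cache, rather than paying the Mathlib import cost per proof.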
Supplementary Material: zip
Submission Number: 133