Trace Reconstruction with Language Models

ICLR 2026 Conference Submission 17039 Authors

Published: 19 Sept 2025 (modified: 08 Oct 2025) · License: CC BY 4.0
Keywords: Trace reconstruction, DNA data storage, Deletion-insertion-substitution errors, Decoder-only language models
TL;DR: We propose TReconLM, a language model trained on synthetic and real-world data for trace reconstruction, which outperforms existing methods by recovering more sequences corrupted by insertions, deletions, and substitutions.
Abstract: The general trace reconstruction problem seeks to recover an original sequence from its noisy copies independently corrupted by deletions, insertions, and substitutions. This problem arises in applications such as DNA data storage, a promising storage medium due to its high information density and longevity. However, errors introduced during DNA synthesis, storage, and sequencing require correction through algorithms and codes, with trace reconstruction often used as part of data retrieval. In this work, we propose TReconLM, which leverages a language model trained on next-token prediction for trace reconstruction. We pretrain the model on synthetic data and fine-tune on real-world data to adapt to technology-specific error patterns. TReconLM outperforms state-of-the-art trace reconstruction algorithms, including prior deep learning approaches, recovering a substantially higher fraction of sequences without error.
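To make the problem setting concrete, the abstract's deletion-insertion-substitution channel can be sketched as follows. This is an illustrative simulation only, not the authors' code; the function names (`ids_channel`, `make_traces`) and the error rates are assumptions chosen for the example.

```python
import random

ALPHABET = "ACGT"

def ids_channel(seq, p_del=0.05, p_ins=0.05, p_sub=0.05, rng=None):
    """Corrupt `seq` position by position: each base is independently
    deleted, prefixed by a random inserted base, or substituted.
    Error rates are illustrative, not from the paper."""
    rng = rng or random.Random()
    out = []
    for base in seq:
        r = rng.random()
        if r < p_del:
            continue  # deletion: drop this base
        elif r < p_del + p_ins:
            out.append(rng.choice(ALPHABET))  # insertion before the base
            out.append(base)
        elif r < p_del + p_ins + p_sub:
            out.append(rng.choice(ALPHABET))  # substitution
        else:
            out.append(base)  # transmitted without error
    return "".join(out)

def make_traces(seq, n_traces=5, seed=0, **kw):
    """Produce independently corrupted noisy copies (traces) of `seq`."""
    rng = random.Random(seed)
    return [ids_channel(seq, rng=rng, **kw) for _ in range(n_traces)]
```

A trace reconstruction algorithm such as TReconLM would take the output of `make_traces` and attempt to recover the original `seq`; note that because of insertions and deletions, traces generally differ in length from the original.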
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 17039