DNA is a promising storage medium due to its high information density and longevity. However, the storage process introduces errors, so error-correcting codes and reconstruction algorithms are required for reliable storage. An important step in recovering information from DNA is trace reconstruction: the goal is to reconstruct a sequence from noisy copies of it corrupted by deletion, insertion, and substitution errors. In this paper, we propose to use language models trained with next-token prediction for trace reconstruction. A simple channel model of the DNA data storage pipeline allows for self-supervised pretraining on large amounts of synthetic data. Additional finetuning on real data enables adaptation to technology-dependent error statistics. The proposed method (TReconLM) outperforms state-of-the-art trace reconstruction algorithms for DNA data storage, often recovering significantly more sequences.
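The channel model mentioned in the abstract can be illustrated with a minimal sketch: an i.i.d. insertion/deletion/substitution (IDS) channel that turns a random reference sequence into noisy traces for self-supervised pretraining. The error rates, alphabet, and function names below are illustrative assumptions, not the paper's actual parameters.

```python
import random

# Hypothetical minimal IDS channel for generating synthetic training data.
# Rates are illustrative defaults, not values from the paper.
ALPHABET = "ACGT"

def ids_channel(seq, p_del=0.02, p_ins=0.02, p_sub=0.02, rng=None):
    """Return one noisy trace of `seq` under i.i.d. IDS errors."""
    rng = rng or random.Random()
    out = []
    for base in seq:
        # Possible insertion of a random base before the current one.
        if rng.random() < p_ins:
            out.append(rng.choice(ALPHABET))
        r = rng.random()
        if r < p_del:
            continue  # base deleted
        if r < p_del + p_sub:
            # Substitute with a different base.
            out.append(rng.choice([b for b in ALPHABET if b != base]))
        else:
            out.append(base)
    return "".join(out)

def make_training_example(length=100, n_traces=5, rng=None):
    """Sample a random reference and several noisy traces of it."""
    rng = rng or random.Random()
    ref = "".join(rng.choice(ALPHABET) for _ in range(length))
    traces = [ids_channel(ref, rng=rng) for _ in range(n_traces)]
    return ref, traces
```

A model trained with next-token prediction would then be given the traces as context and asked to emit the reference, so unlimited (reference, traces) pairs can be sampled this way before finetuning on real sequencing data.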
Keywords: DNA Data Storage, Trace Reconstruction, Language Models
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11404