Geometry Informed Tokenization of Molecules for Language Model Generation

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-SA 4.0
Abstract: We consider molecule generation in 3D space using language models (LMs), which requires discrete tokenization of 3D molecular geometries. Although tokenization methods exist for molecular graphs, tokenization of 3D geometries remains largely unexplored. Here, we attempt to bridge this gap by proposing a novel method which converts molecular geometries into SE(3)-invariant 1D discrete sequences. Our method consists of canonical labeling and invariant spherical representation steps, which together maintain geometric and atomic fidelity in a format conducive to LMs. Our experiments show that, when coupled with our proposed method, various LMs excel in molecular geometry generation, especially in controlled generation tasks. Our code has been released as part of the AIRS library (https://github.com/divelab/AIRS/).
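To make the idea of an SE(3)-invariant spherical representation concrete, the sketch below shows one simple way such a conversion could work: fix a local frame from the first few (canonically ordered) atoms, then express every atom in spherical coordinates within that frame, so the resulting sequence is unchanged by global rotations and translations. This is an illustrative assumption, not the paper's actual implementation; the names `local_frame` and `to_spherical_tokens` and the frame construction are hypothetical, and the real method's canonical labeling step determines the atom ordering.

```python
import numpy as np

def local_frame(coords):
    """Build an orthonormal frame from the first three atoms
    (hypothetical construction for illustration only)."""
    o = coords[0]
    e1 = coords[1] - o
    e1 = e1 / np.linalg.norm(e1)
    v = coords[2] - o
    e2 = v - np.dot(v, e1) * e1        # Gram-Schmidt orthogonalization
    e2 = e2 / np.linalg.norm(e2)
    e3 = np.cross(e1, e2)              # right-handed third axis
    return o, np.stack([e1, e2, e3])

def to_spherical_tokens(atoms, coords, ndigits=2):
    """Convert (atom types, 3D coordinates) into a discrete sequence:
    each atom becomes (element, r, theta, phi) in the local frame.
    The result is invariant to global rotation and translation."""
    o, R = local_frame(np.asarray(coords, dtype=float))
    tokens = []
    for a, x in zip(atoms, coords):
        p = R @ (np.asarray(x, dtype=float) - o)  # coords in invariant frame
        r = float(np.linalg.norm(p))
        theta = float(np.arccos(p[2] / r)) if r > 1e-8 else 0.0  # polar angle
        phi = float(np.arctan2(p[1], p[0]))                      # azimuth
        tokens.append((a, round(r, ndigits),
                       round(theta, ndigits), round(phi, ndigits)))
    return tokens
```

Because the frame is built from the atoms themselves, applying any proper rotation and translation to the input coordinates leaves the token sequence (up to rounding) unchanged, which is the property that lets a language model train on these sequences without data augmentation for rigid motions.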
Lay Summary: Designing new molecules with desired properties is a central challenge in drug discovery and materials science. While recent AI models have made strides in generating molecules as strings (like text), they often ignore a molecule's 3D shape, which is crucial for how it behaves in the real world. Our research tackles this problem by introducing a method called Geo2Seq, which teaches language models to understand and generate molecules using both their atomic structures and their 3D geometry. We designed a novel way to convert 3D molecules into sequences that retain spatial information, allowing language models to "read" and "write" molecules more effectively. We also built a geometry-aware decoder that reconstructs full 3D molecules from the generated sequences. This two-part approach makes it possible to generate realistic, valid, and diverse 3D molecular structures. Our method improves how well AI can design new molecules with specific geometric properties, which could accelerate the discovery of new drugs and materials. By combining powerful language models with 3D chemical knowledge, Geo2Seq bridges a major gap between structure-based science and generative AI.
Link To Code: https://github.com/divelab/AIRS/tree/main/OpenMol/Geo2Seq
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Language models
Submission Number: 14772