Abstract: Pre-trained large language models (LLMs) are a powerful platform for building custom models for a wide range of applications.
They have also found success in chemistry, but they typically must first be pre-trained on large chemistry datasets such as reaction databases or protein sequences.
In this work, we analyze whether one of the largest pre-trained LLMs, GPT-3, can be used directly for chemistry applications by fine-tuning it on only a few data points from a chemistry dataset, i.e., without pre-training on a chemistry-specific corpus.
We show that GPT-3 achieves performance competitive with baselines on three case studies (polymers, metal-organic frameworks, photoswitches), using representations as simple as chemical names, in both classification and regression settings.
Moreover, we demonstrate that GPT-3 can also be fine-tuned for inverse design tasks, i.e., to generate molecules with properties specified in a prompt.
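To make the fine-tuning setup concrete, the sketch below builds a prompt-completion training file of the kind GPT-3 fine-tuning consumes, with a chemical name in the prompt and the target property as the completion. This is a minimal sketch under stated assumptions: the prompt template, the "###"/"@@@" separators, and the toy data are illustrative, not the paper's exact format.

```python
# Minimal sketch of a GPT-3 fine-tuning dataset for property prediction.
# Assumptions: prompt template, separators, and toy labels are illustrative.
import json

# Toy examples: chemical name -> coarse label for a photoswitch property.
data = [
    ("azobenzene", "low"),
    ("4-(dimethylamino)azobenzene", "high"),
]

# GPT-3 fine-tuning consumes JSONL with "prompt"/"completion" pairs;
# fixed suffixes mark where the prompt ends and where generation should stop.
with open("train.jsonl", "w") as f:
    for name, label in data:
        record = {
            "prompt": f"What is the transition wavelength class of {name}?###",
            "completion": f" {label}@@@",
        }
        f.write(json.dumps(record) + "\n")
```

For regression, the completion would simply hold a numeric value instead of a class label; the file can then be passed to OpenAI's fine-tuning endpoint.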
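The same mechanism can be inverted for the inverse design setting: the desired property goes into the prompt and the molecule becomes the completion. Again a hedged sketch; the prompt wording and the example pairs are assumptions for illustration.

```python
# Inverse-design sketch: property in the prompt, molecule as the completion.
# The prompt wording, separators, and example pairs are illustrative
# assumptions, not the paper's exact templates.
import json

examples = [
    ("350 nm", "azobenzene"),
    ("450 nm", "4-(dimethylamino)azobenzene"),
]

with open("inverse_train.jsonl", "w") as f:
    for wavelength, molecule in examples:
        record = {
            "prompt": f"Give me a photoswitch with a transition wavelength of {wavelength}###",
            "completion": f" {molecule}@@@",
        }
        f.write(json.dumps(record) + "\n")
```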
Paper Track: Papers
Submission Category: AI-Guided Design