Abstract: While the abilities of language models are thoroughly evaluated in general domains and biomedicine, academic chemistry remains less explored. Chemical QA tools play a crucial role in both education and research by translating complex chemical information into an understandable format. Addressing this gap, we introduce ScholarChemQA, a large-scale QA dataset constructed from chemical papers. Specifically, the questions are derived from paper titles that end with a question mark, and the multiple-choice answers are derived from the corresponding abstracts. The dataset reflects typical real-world challenges, including an imbalanced label distribution and a substantial amount of unlabeled data that can be potentially useful. Correspondingly, we introduce ChemMatch, a model designed to answer chemical questions effectively by fully leveraging our collected data. Experiments show that Large Language Models (LLMs) still have significant room for improvement in the field of chemistry. Moreover, ChemMatch significantly outperforms recent similar-scale baselines. Code and data are available at https://github.com/iriscxy/chemmatch .

Question Answering (QA) models have emerged as crucial tools for acquiring knowledge and evaluating domain-specific abilities; however, the domain of chemical QA remains underexplored. Here, the authors report ScholarChemQA, a large-scale QA dataset, and introduce a ChemMatch model for effectively answering chemical questions and acquiring chemistry-related knowledge.
DOI: 10.1038/s42004-024-01394-x