Climate Change from Large Language Models

ACL 2024 Workshop ClimateNLP Submission 3 · Authors

05 May 2024 (modified: 18 Jun 2024) · Submitted to ClimateNLP 2024 · License: CC BY 4.0
Keywords: climate change, large language models, knowledge evaluation
TL;DR: We propose an automated framework to evaluate climate knowledge within large language models through a hybrid data acquisition approach and comprehensive evaluation metrics, revealing the need for continuous updating of climate-related content.
Abstract: Climate change poses grave challenges, demanding widespread understanding and awareness of low-carbon lifestyles. Large language models (LLMs) offer a powerful tool to address this crisis, yet comprehensive evaluations of their climate-crisis knowledge are lacking. This paper proposes an automated framework for evaluating climate-crisis knowledge in LLMs. We adopt a hybrid approach to data acquisition, combining data synthesis with manual collection, to compile a diverse set of questions covering various aspects of climate change. Using prompts built from the compiled questions, we evaluate each model's knowledge by analyzing its generated answers. Furthermore, we introduce a comprehensive set of metrics that assess climate-crisis knowledge from 10 distinct perspectives. These metrics provide a multifaceted evaluation, enabling a nuanced understanding of the LLMs' comprehension of the climate crisis. The experimental results demonstrate the efficacy of our proposed method. In evaluating several high-performing LLMs, we found that while they possess considerable climate-related knowledge, they fall short on timeliness, indicating a need for continuous updating and refinement of their climate-related content.
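The evaluation loop the abstract describes (pose compiled questions via prompts, then score the generated answers along several metrics) can be sketched roughly as follows. The question set, the ten metric names, and the scoring rule below are illustrative placeholders, not the authors' actual data or implementation:

```python
# Hypothetical sketch of a prompt-based climate-knowledge evaluation loop.
# Questions, metric names, and the scorer are assumptions for illustration.

QUESTIONS = [
    "What is the main driver of recent global warming?",
    "Name one low-carbon lifestyle change individuals can adopt.",
]

# Ten evaluation perspectives (these names are assumed, not from the paper).
METRICS = ["accuracy", "timeliness", "completeness", "relevance", "clarity",
           "consistency", "neutrality", "depth", "actionability",
           "uncertainty-awareness"]

def build_prompt(question: str) -> str:
    """Wrap a climate question in a simple instruction template."""
    return f"Answer concisely and factually:\n{question}"

def score_answer(answer: str, metric: str) -> float:
    """Placeholder scorer: a real framework would apply per-metric
    rubrics or judge models. Here we only check for a non-empty answer."""
    return 1.0 if answer.strip() else 0.0

def evaluate(model, questions=QUESTIONS, metrics=METRICS):
    """Query the model on each question and average per-metric scores."""
    totals = {m: 0.0 for m in metrics}
    for q in questions:
        answer = model(build_prompt(q))
        for m in metrics:
            totals[m] += score_answer(answer, m)
    return {m: totals[m] / len(questions) for m in metrics}

# Example with a stub "model" that returns a fixed string.
report = evaluate(lambda prompt: "Greenhouse gas emissions.")
```

A real instantiation would replace the stub with API calls to the LLMs under test and aggregate the per-metric averages into the framework's final report.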
Archival Submission: archival
Submission Number: 3