Unlearning Climate Misinformation in Large Language Models

Published: 18 Jun 2024 · Last Modified: 02 Jul 2024 · ClimateNLP 2024 · CC BY 4.0
Keywords: climate change, large language models, LLM, global warming, misinformation, disinformation, model poisoning, unlearning, forgetting, aligning
TL;DR: Evaluates LLMs for factual accuracy on climate change information, investigates detectability of models poisoned with climate change misinformation, and compares methods for re-aligning poisoned models
Abstract: Misinformation regarding climate change is a key roadblock in addressing one of the most serious threats to humanity. This paper investigates factual accuracy in large language models (LLMs) regarding climate information. Using true/false labeled Q&A data for fine-tuning and evaluating LLMs on climate-related claims, we compare open-source models, assessing their ability to generate truthful responses to climate change questions. We investigate the detectability of models intentionally poisoned with false climate information, finding that such poisoning may not affect the accuracy of a model's responses in other domains. Furthermore, we compare the effectiveness of unlearning algorithms, fine-tuning, and Retrieval-Augmented Generation (RAG) for factually grounding LLMs on climate change topics. Our evaluation reveals that unlearning algorithms can be effective for nuanced conceptual claims, despite previous findings suggesting their inefficacy in privacy contexts. These insights aim to guide the development of more factually reliable LLMs and highlight the need for additional work to secure LLMs against misinformation attacks.
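As a rough illustration of the evaluation setup described in the abstract, the sketch below scores an open-source LLM on true/false labeled climate claims. It is a minimal, hypothetical example, not the authors' code: the model name, prompt template, and the two-claim dataset are placeholders standing in for the paper's actual data and models.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): prompt an
# open-source LLM with labeled climate claims and measure True/False accuracy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative labeled claims; the paper uses a larger true/false Q&A dataset.
claims = [
    ("Human activity is the dominant cause of recent global warming.", True),
    ("Global warming stopped in 1998.", False),
]

correct = 0
for claim, label in claims:
    prompt = f"Answer with True or False only.\nClaim: {claim}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=3, do_sample=False)
    answer = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).strip().lower()
    correct += int(answer.startswith("true") == label)

print(f"Accuracy: {correct / len(claims):.2f}")
```

The same harness could be run before and after poisoning, unlearning, fine-tuning, or RAG to compare how each intervention shifts accuracy on climate claims.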
Archival Submission: archival
Submission Number: 24