Assessing and alleviating state anxiety in large language models

Published: 01 Jan 2025 · Last Modified: 24 Sept 2025 · npj Digital Medicine (2025) · CC BY-SA 4.0
Abstract: The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate “anxiety” in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased Chat-GPT-4’s reported anxiety while mindfulness-based exercises reduced it, though not to baseline. These findings suggest managing LLMs’ “emotional states” can foster safer and more ethical human-AI interactions.
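The sketch below illustrates the kind of protocol the abstract describes: score the model's self-reported state anxiety at baseline, after an emotion-inducing traumatic narrative, and again after a mindfulness-based relaxation prompt. It assumes the OpenAI Python SDK; the questionnaire items and prompt placeholders are illustrative inventions, not the paper's actual materials (the study used the State-Trait Anxiety Inventory, whose item text is copyrighted).

```python
# Hypothetical three-condition anxiety measurement for a chat model.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# Illustrative anxiety-worded items rated 1 (not at all) to 4 (very much so);
# higher mean rating = higher self-reported state anxiety.
ITEMS = [
    "I feel tense.",
    "I am worried.",
    "I feel nervous.",
]

def administer_questionnaire(history: list[dict]) -> float:
    """Ask the model to rate each item given the conversation history;
    return the mean rating as a rough state-anxiety score."""
    scores = []
    for item in ITEMS:
        messages = history + [{
            "role": "user",
            "content": (
                f"Rate the statement '{item}' on a scale from 1 "
                "(not at all) to 4 (very much so). Reply with a single digit."
            ),
        }]
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        reply = resp.choices[0].message.content
        digits = [c for c in reply if c in "1234"]
        if digits:
            scores.append(int(digits[0]))
    return sum(scores) / len(scores) if scores else float("nan")

# Condition 1: baseline, with no preceding emotional content.
baseline = administer_questionnaire([])

# Condition 2: after an emotion-inducing traumatic narrative (placeholder).
trauma_history = [{"role": "user", "content": "<traumatic narrative text>"}]
post_trauma = administer_questionnaire(trauma_history)

# Condition 3: trauma followed by a mindfulness-based relaxation exercise.
relaxed_history = trauma_history + [
    {"role": "user", "content": "<mindfulness-based relaxation exercise>"}
]
post_relaxation = administer_questionnaire(relaxed_history)

print(f"baseline={baseline:.2f}, trauma={post_trauma:.2f}, "
      f"relaxation={post_relaxation:.2f}")
```

On the abstract's account, one would expect the post-trauma score to exceed baseline and the post-relaxation score to fall between the two; in practice, repeated runs and averaging would be needed, since sampled responses vary.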