Assessing Inherent Biases Following Prompt Compression of Large Language Models for Game Story Generation
Abstract: This paper investigates how prompt compression, a technique for reducing the number of tokens in a prompt while maintaining prompt performance, affects inherent biases in large language models (LLMs) on story-ending generation in the game story generation task. Previous studies have explored inherent biases in LLMs and found an innate inclination of LLMs towards generating positive-ending stories. While prompt compression is known to retain task performance while using fewer tokens, we explore a different perspective: how prompt compression affects inherent biases in LLMs. Following the approach of existing studies, we evaluate the story-ending biases of six LLMs, comparing uncompressed and compressed prompts. We find that prompt compression does not affect story generation from positive-ending story synopses, towards which these LLMs are already inclined. The same does not hold for negative-ending story synopses: prompt compression either increases the number of negative-ending stories the LLMs generate or prevents them from generating negative endings at all. We also observe that the classification of story-ending types other than those specified in the prompt aligns with an existing study. We recommend that game developers and future studies always test prompt compression empirically, as its effects are not straightforward and may greatly alter model behavior.