Procedural Generation of Semantically Correct Levels in Video Games using Reward Shaping

Published: 20 Jun 2025, Last Modified: 22 Jul 2025 · RLVG Workshop - RLC 2025 · CC BY 4.0
Keywords: PCGRL
TL;DR: Reward shaping can be utilised to produce semantically correct levels.
Abstract: The generation of video game levels traditionally relies on manual effort by skilled professionals, resulting in significant expense and time commitments. Procedural generation offers a solution by automating this process, reducing costs but potentially sacrificing designer control. This diminished control has limited the widespread adoption of procedural generation, owing to concerns about the quality of the generated levels. Various approaches, including reinforcement learning and evolutionary algorithms, have been explored to address this limitation by improving how well procedurally generated levels align with designer constraints. However, a key challenge remains: designing reward schemes or evaluation functions that accurately capture these constraints. To tackle this challenge, this paper proposes a system that uses semantically appropriate reward shaping in a reinforcement learning setting for procedural content generation. By integrating an additional shaping function into the reward mechanism, the system generates diverse video game levels in the Zelda Gym environment that meet designers' specific requirements and constraints.
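To illustrate the general idea of adding a shaping term to an environment's base reward, the sketch below shows a potential-based shaping wrapper in a Gym-style setup. This is not the paper's implementation: the `semantic_score` callable, the `weight` and `gamma` parameters, and the wrapper itself are assumptions introduced for illustration; the actual Zelda Gym reward and constraint functions are described in the paper.

```python
import gymnasium as gym


class ShapedRewardWrapper(gym.Wrapper):
    """Hypothetical sketch: add a designer-defined shaping term to the base reward.

    `semantic_score` is an assumed callable that scores how well the current
    level satisfies the designer's constraints (e.g. the key is reachable
    before the door). It is not part of any real library.
    """

    def __init__(self, env, semantic_score, weight=1.0, gamma=0.99):
        super().__init__(env)
        self.semantic_score = semantic_score
        self.weight = weight
        self.gamma = gamma
        self._prev_score = 0.0

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._prev_score = self.semantic_score(obs)
        return obs, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        score = self.semantic_score(obs)
        # Potential-based shaping: reward the improvement in the semantic
        # score rather than its absolute value, which leaves the optimal
        # policy of the underlying reward unchanged.
        shaped = reward + self.weight * (self.gamma * score - self._prev_score)
        self._prev_score = score
        return obs, shaped, terminated, truncated, info
```

Using the score difference rather than the raw score keeps the shaping signal dense while avoiding a constant bias that could dominate the environment's own reward.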
Submission Number: 7