Abstract: Tetris is one of the most popular video games ever created, perhaps in part because its difficulty makes it addictive.
In this course project, we successfully trained a DQN agent in a simplified Tetris environment with state-action pruning.
This simple agent achieves reasonable performance on the Tetris problem.
We also applied several state-of-the-art reinforcement learning algorithms, such as Dreamer, DrQ, and Plan2Explore, in the original Tetris game environment.
In addition, we augment the Dreamer algorithm with imitation learning, a variant we call the Lucid Dreamer algorithm.
Our experiments demonstrate that these state-of-the-art methods and their variants fail to learn to play the original Tetris game.
The complex state-action space makes original Tetris a very difficult game for non-population-based reinforcement learning agents.
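The state-action pruning used in the simplified environment is only named here, so a minimal illustrative sketch follows; it shows one common way to prune Tetris actions by treating each legal final placement (rotation, column) as a single macro-action scored by a value function. The board dimensions, helper names, and toy value function are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of state-action pruning for Tetris (not the authors' code):
# each "action" is a final (rotation, column) placement rather than a key press,
# so the agent only scores a small set of successor boards.
import numpy as np

BOARD_W, BOARD_H = 10, 20

def drop_piece(board, piece, col):
    """Hard-drop `piece` (2-D 0/1 array) at `col`; return the new board,
    or None if the placement does not fit. (Simplified: ignores tucks/spins.)"""
    h, w = piece.shape
    if col + w > BOARD_W:
        return None
    for row in range(BOARD_H - h, -1, -1):      # scan from the bottom up
        region = board[row:row + h, col:col + w]
        if np.all(region + piece <= 1):          # no overlap with existing cells
            new_board = board.copy()
            new_board[row:row + h, col:col + w] += piece
            return new_board
    return None

def enumerate_placements(board, rotations):
    """Prune the action space to all legal (rotation, column) placements."""
    placements = []
    for r, piece in enumerate(rotations):
        for col in range(BOARD_W - piece.shape[1] + 1):
            nxt = drop_piece(board, piece, col)
            if nxt is not None:
                placements.append(((r, col), nxt))
    return placements

def greedy_step(board, rotations, value_fn):
    """Pick the placement whose successor board scores highest under value_fn
    (e.g. a learned Q-network or a hand-crafted heuristic)."""
    placements = enumerate_placements(board, rotations)
    return max(placements, key=lambda p: value_fn(p[1]))

if __name__ == "__main__":
    board = np.zeros((BOARD_H, BOARD_W), dtype=int)
    # S-piece and its single distinct rotation.
    s_piece = np.array([[0, 1, 1],
                        [1, 1, 0]])
    rotations = [s_piece, np.rot90(s_piece).copy()]
    # Toy value function: prefer lower, flatter stacks.
    heights = lambda b: BOARD_H - np.argmax(np.vstack([b, np.ones(BOARD_W)]), axis=0)
    value_fn = lambda b: -heights(b).max() - np.abs(np.diff(heights(b))).sum()
    (rot, col), next_board = greedy_step(board, rotations, value_fn)
    print("chosen rotation/column:", rot, col)
```

In this pruned formulation the agent chooses among at most a few dozen placements per piece, rather than sequences of low-level moves, which is what makes DQN tractable in the simplified environment.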
Video link and related resources: https://drive.google.com/drive/folders/14aounKtRyg28azhtPcRsnymKdlhAgfDM?usp=sharing